[Binary content: POSIX (ustar) tar archive of Zuul CI job output, owner core:core. Archive members:
  var/home/core/zuul-output/                      (directory)
  var/home/core/zuul-output/logs/                 (directory)
  var/home/core/zuul-output/logs/kubelet.log.gz   (gzip-compressed kubelet log)
The remainder of the file is the compressed binary payload of kubelet.log.gz and is not reproducible as text.]
RL@SNq飶ěU`̙{ѵ8ڵvU=Jc+껴zocQܤkgu ȓG(0G J]B:`^e.Dڇ Xq[;ЮH"88fSa1 A2 ('` 12Gd1а-VZ{F:F(ݱ"pFBb%#1))DXҠ8KFbR}XæeivHM;i ;R@ S' ͩʛR-#l`uṇXϋÙt. {yspO  R>9 //_+/_1wF㛝:̱>xB|)VaR~Dza{[Ky+0{&%*픱a:'m<?c"evݕjbi܌f㪫G022?_SOZȽ{DP&EUxf:yt O/w蒪 uf,9a~4G3ѭقl8dpv LF\K9k< Dk,"!`)= X"ap)2 OYIèU1kQ3|k䛽GΖ1<׸ ^ڝuc=-S|\޾Pzw,y"Tyа||Lr^҇t;|05LRfodItDcG)I0lQ R' 3(F w@ AI Y1}j LwD@4:&yI`p `Jb,A#EA̒ r>Wsy:ʺ[?ZD,cHv QV4`6Vx+H46 0?Onc-M[)A \ HR˃!+.BT*~\Qp&!Ἦ&a3?VoՎ |Ov%Sc ?cg/ZKiG"L;O^Uatuk܇RRɵq보3QXr)[]{=E](b;+dڞ, `G#L^bf2 WvpӝL^u@ggJ X@P&j.*wWϳ 7;Ϗ{Ipx_qwLg>u }7k9/-Yozu[(#J{z HMu`—mZqnPY)w4Sh|ɛrEpPFnޙ+?e[oyQ6a6͋L4Ê0|j$E#tUHF HfQXށ"06 8VppS2zrK՟n@̓#h3rMTƪiNҙe(uٻqdU\ԫݽ8dKy7fֽy)Y7! 3,w-I$ a<[[-V[6ab.' MmSM9*REZY8Hok<^pnpݡ~?:;Lu7Hm/%٘HO=?MMU5 M *4|eu|%.L0͜ V mȧk@I07„Kə+hr {jVJ%#@6Lto(LɱUETlk_i@`]x0UJǐr^X#ͣˈ$ң А!{GU*ďaF߬~ ec?ȳ?"7|nPؘiyj?\/@I|J8s)50ŌʽA>  pv,dzxLR֑|T%ZH^)Ucxr]}f3[8 e5-SKH* fMD u뱆Ĝa(iw,xBң} pz ^{mCpazل&ه df`{E|c9{ 8y{e8c[\{~%fJ3|}em֏lHh#('ׯ/,|A[柖]JAxvq^i\ >/%C^dX=Dq]d+~[} 4ŶB@6k۵9^o#`dXՉ fm1ݑ9ᛠ #d荥|{No tQ`;+7} bQk[Eض,d<#s 2H! ǁ Y$Iϐ :1P;irhE ʁ, KD&7]! "3V}')?pC&IGn 2M,ytq sb} 2$[행Xv:čYȤt̵2>((X?6qiJp8SH:UZb8ut[Gtyn9mƭjU6h6<6fܦLF.2B[7=Bx`y+S2v M@k_9跔o+'_94Md۲YFA.RAAf PO&H 麖c37*IV(c|K4X:czokTIZXgB/x* :ƗS.@ʄ {0sF 6i{|MӍXEQ@YՕ~bZf/|%z!Okn58_o@V4a7.AB#}U?O,a6| p! 
{1w*ȩ27O~H[F'/- d)mO1q i(^v{a_JR D03ua@Gܠ= [G @WfzX5 0 m}]!!FMa{G9&FD 6H,Uc{8^ ZnZ&=S7~2}}Kb %H gz3pxU:Z&0Zï(nQ -ҩ8Bx1cs-UfJznf~Ѽ.4~t [-u.*)uLs.nM]_-Ծ- Ǧ]sgJZ`,[B Y.MOeqz63?E֬*OZ4SZ۬S,Z *8S*jekf6CϪ" ٚ.psyR ֻn4w&40_8J[菾)WyBN!¾x™M1eQFMؖi☳i2oOe].0?vW.Z2)FA(h.a@p^|kX5[6᣾K?L֊T`^a9ͬש9~ur?IuTO-iuvkԥ-[V-V-"5qKscJzzgf}ɣ+HZ Ǯxf4h̻U ǎR]2_.[sհFc<1u+ZSRo,'V-b:b:Y2kQJ U$Ph uƣj B R)0%o`Տ-a15G\PwCjK(5,L!i !Ò6fn$+-|JL*} paN O^/Lep ۃm:g9d4dGzA{léPL-ƊʷSٵڧ1b ?XgW)=fl/45wqiZqYn@u8楇z)maITy@~iƦvӕƦ;9lp*Y:2ǰi,r&Wk.ǴƩ:M SEg35npͮSi662:/|++Tk17-꽱Fr͗`f5&湄}?#xw;}=Lk@GkAr1jlYgᵜϓԅ;yc.W}Eޚ$'r}FZ;Եv ٥BR)?ӻ_ ue~@̕h RT -Wbu!h kgޑs=2p4%+]֑ B(myEK_s1k*R;1 i6V-wPi}mjFJZ@VjH9k('gW߄\@U*o *އ෼:r/CKRZJG{z?L5qC~9 /+ Gr֊ ِya?eӧqxť;?+u߇דJ q481}=y-ԕb9ukOO6qp֟SUk%W|W~~%ɓ£!ôp/vꎀTO\d} HBJ2e辨;R^e}t 'BOIjX<! T3+Fٶr2kW5],o^-~,SGSi,˴f^6Iݤ[!xklI`E斴221 (>f#j0/Eff-Ke,۞zҦM0?i; -peZZ9WjۙmM}g2Cg^Ӱ**S{)ee.ޔ؀ņItgSu5MRf2a#FFO^@EL׵,rNiR""'t7#7} jm1#J`UK6y#cʥ@";igb:*6LnC5oH,B9C61$AVi!~kEd??w;q'N:]gߝ*Ee_TfLJgVwܫ$ZۀV_Um4е_a*v9E5~F>J1?ƣ֩ȟ `+WQ߇2oabvc_c|5) '~9P }|{vs2^qw6ǭ?Q9y(v H0/lį2WCv(:d@O9b^_,ωSsO:5SsO:5NYv} 01l(>Q>zоh_Q;KQb۵[ʕϙ@]X<׭}_&p^FXPI8Eiz!NKu݅}`'dgtmd42o#lm yM# 9श Sy4Ԗ0BRjR΁CvWmj(#ñrN~]PomUSָD>v+Cx>PEd|SҰ7Ic.Ίvg}6K$Sڜy< &hSܡҥ{kC/K9irFD`ݩլB L=Փxwj@._JN%?в\M#|PAfL"!-k@1mP3T,Ȟp|r$!+ W ?{}|%o~M2f}x7[ĢEy4pi9fq>ar^]wVKD5|z]b8t#\rh~TߝcF1k+8 gK#<ت((}z1~2@Z0SRR$$A:UqJW RN!e#Չ}Tc**_ B` Ch'%Rb߳ b:#YiLܜa^3!Se>ux'{4d|8o@?!|r0_<rAȣem~_14V0JtVcu6#E^.D:F_,GU/MՓ[I}ۋiY~-I^D79jL57Bq=qS &s:lL+dR$ܲ<$Wx?H`x`yEш38iDsDq^&P-a_h8+hcc2y ~vp6 źX[@ıR+̠59fR#IC LʐSMQy=0P[S2+vZ J 3( ) qqTTҦ:aEBX=?Rpsh?3YSӻMH\ Fi ~~.9'3ɂt{b^D;} %IoN L&Ba1LJۿ_ƓoM.śE$ؼo֚ k亷Lr/^?P  | o9b}S{3s;I`])T*<8(׀ [\ɳ豹H_D#&pk|;R!jwvd;(HQ#{UӋbv?>JGHx) o|ṣk.^Y@씈_e{Ef]%+W/系*8[o_􄔍WC7ecbl+"jAo_ݩa9;"4YcMtT j&OR[MYhpeTt1#pP tQgpP9;ZN6WDcYkv_X#b_I@XQro w/uQëxԴb׬WC3O|2B=Z0Z07Q!#jsݬcmclԑDy#*JxMQZ^є>lISZlN%#zىD0ϧJ0zl}oH<~z@KFI S%{~w#䟾?8e+u15cFJ@%U™ABGi"0"rL :R$.?>E I~HtoDs0e?vC(1<%IL1 +l u)Lq YC2HP%t!ٷqza3X2feǂ ʘHmleԴrʿ#ۑ.bJ8(N+ͲJ5A ?Mȴ3TvH{!ezr[ލFzKy5Yp^Vkp T v9],9?fY<+ ~oǽ増 
=f/4ɭyn+SզSWuѾJPBcWR=y<ʚ)MG y©.ݣ7/UY@9 . +דp˗4ˌV|Jُu~M>ٰѫKvx0X>jV %'v&A{ 2=(Z w-+U0e׮.S |B1ilyIƟw)K:ޕVh$t MV݆D٪`Jzjʁ߼)ۓU'S]I"u;HFݦRK2p8:qNZI:ZUfgRRt]WJe€ّZrzH6BT)O!5j[U{Ķt(=eB`Ofj<;Z`fVؒQpB_`0'~̳bG{?П黎c 'y0xRdz&ngCzyF@M19ᣞ J|⽷]d1 @?`pSFt  Q%R rF#BxPGǏo䭝%D21m*zOfIȃ! M\.YW1/#F8lzĴm3z}B=j:*q^u!AOxn|鸡bc<;niߵ~1ݧ(V|4?Yڧ`upp` =i%ϔiٜp?v fab6Sѿ1`}-éקTh]^s#<:L*<%0\vEOv =~QF0 1}0UKe pq) S(ڡ4!)g`KIc@d V;[ɆnAHj{b>aoQAM{C oWcT4M}67> qׅZe UvZMq6n9V@*i!f%weIuZdoSfV_"B+>c: _K\xz(qC$Xz^̄ϫ,;T(ruFMΝMvi0Xߏ< iXݩ߼mp+ ݺB: W0EJ80!D W4HpBHx'σeHgɐܢeC;?bX˨CT@iI ĩ*M%$B1;ʥJb%ixأ0?N-;M{^aqC 6s9.Yo1 hZLe=-fsSO/;ڌfܭQz#v |'nOz4EQiHwi4.nQwbz.f*8ұtL>ahYfK~y4.-n*MûFyQx;"vʕ!L4+nOs ]ʉo!llE*+d͢EѶɆ2.)3ރD?cy5P 'Ud.iR ۳;3Odlط,Kk,?=USWF|vig/r8D dzFL=d ձa^tYeh.!CA.EAw C$xndeQUeNPTPLTygd8Våor?y⹺ˮMX%7j{'eSǓ~HCgbw~mqT<ٻ C"PBDBfIMId,<+)J'^H||l?vːߖgxWk'ՍβM泻LEC<lƺҙ^tZBR*ɬI(a%J(R pVAa@u2=XN\S2Lo'^g7֖WwWZ8K;}DŵTx=a1}pB@I@Zt8 M{ & f$M!7)N~;nfy v7b*yZh#k 4fO?~O;BQXM`"!V$H%,xo^4',t4W}EwR_/'i '3m MB@J9F )ɐg!fu%t)hwQE1dҡ.!KcrG(e`Ho"*/I,l/١WGǫcvБ>;^a+=!zC C'7u:znt 0HYF8jLQ3&%ũ4X8l2ؙ y`>@Tبd7f2'aVyNNHyokP9e\2+%)ZI `_n$нS'ܓON{SmCOC_ͭfb<'ε9gi+L;ގI$ }NLmB$u vL=`c'ޥIBΗ9Y6C{u؟GzmfgeB:OCuf}64KZB뗶G'iZ<4wj[O$BVSc4h߸J!b&ΟN6qnZFqR&Q]#2\)`iu3]4yF_<; &[XzN͡5/"a5-ߊ/aRX^ _G#|IP[?W$YR3߰E:b i,QSDS 7RcAbǬ+LPցfPlHC^txV&h ʝByjPX:M&ld#q+NZ`5 ofdGC]57iYPOQiYMXe%P|^TBgsNESpaGw}y ɮ< Ny ,NqXwF1*dE|(Ԧxpk=D(gK:_zrvGeݳ@rc V<_BUqbg쌭σKg1oQ` FnޤlxYgOȴ3s^N_`.8>q܆\*f#'S+FB+ V#[c<91a鲳WӎJҔZz:ٶ 'e:rBYސr7n晩3],99="I~ĤG B6} q=ǜSߗ ЏLfϙy{ e{yt? (񳔽d0VJxնMhc=.xQ[6  뤱ֵU$Ӝϸd@j8'9N*~rȩ/!`6džUiTg^889-脉lVOoel0MyW$-17P M=ىݮQCn}Sa%*&´)U`AG8.-fPE)la $]F.螠Llbj FoI]H_q2;_q/i]7z1j;zL[v/D56$? 
E ,#+ `&s%FqIHjmz҇8)kq <t磌](刷=s+E!eqk9u:KB8B\žTqe:WA>~߃w]}\ !4}wQ×sa,6$[͠hfP:ヺaUAf# o_V"NVfBpA|'l|iýYA?ojP_R$m76`ʔ [fwOiztvv\I_*e0%$ՆhG>hijjg-FS8J{7j0 8?&Tn؅/)sI`+pcS2IcnؾoͶ勛X`ىXXQQQ+ yJސ+LP"'yeJۧ8y=#¶"Q9<" wY1dَgؕ^>V<}W'9\6kraԄ-,38xҎal{9 >;v7Q89me|g3\gFRq$g9y?/J强|5:@rj9~zτO0G6MEXRX}-C#]+菒CΓTiyBaDŽgO;ۍɠJ th|g px6ED p *tԂ,IX`03&SLp@|mrZBh`X>` *$셋Fƾ"s)Fv"&v#DƎ)(`1 74 bP#hS(GDØ ݲlZp~7Bk} p#|B5@q-ߑ%^ca*% C| }L<)8L)_isu<(G;枧0]":`p ¬8)Ʈc,uE,,N-o/FzwBӌXiz:aty |nlBY !l]C"GTAd.0 t‘{炞A7PLccwe65 c65 淩Բmj65j[]]MOmW.7?ƶ6v֤A[9S{_Wm]׊~tC.F73{C|8}-7N_ˍō<#0ܧ`y`$<#pg8F[7X}΅/cBЀ!`ok.-F`s(!{J}Mvـ3LNu/㗯mrQQkr†#-p·ލrO&k}: ~L>Xr9~ANS`+ݵ:FX?&Pxm<9#WTV k Ά(0Mzn2\o:od =CdFɔ>>22o!c$}ǨYwaRKP%tJ7a淵5C?`znkOϫ .Mt}(Ϻ"XѴǚw\]T\J/q{ofl7I?[ 0n#Gslxznuz ?e1\^m._0VixNdk*EQwyjϭ_ 6P|*쁩h[y*5~̣jYZCOd7nz Cs1 S]& %/ kXVu =LUN6^]e>kxiT ک!v>v#0XIw I(Ә <# Xv}u{Oqt2+ՁKf]/Wit5X kMe E$! QlO%B>SΖsYrrv=o}]ӣܐu[(ZzQeE~H,> D7C7ouK][]:' բ.w"lcg<\v ~X OW%I2hݡj%]Z \|]u:j5(]m+ZٔRW`u튺䭫+ i6[TWF cE %,O}w~9-7823Z=[93?큤BF[Pֽ ²VUeţ$.10h1r |rYeg>X.6F[2 ! Ա+D?L!!b33A2SRw) t , 3cisDq<8a1AXs "l [> o4vhuH+o@a)K n_(hL (N]lyi).LQ/S ܠ`q <B~*V?<4ou4cN&Znn['Q2m긴5/m:Aģn\pVPLKsc+E;~Pt ߝypмO${R? /A5FN;yۥ mտƐx֞ g@wڱ^~xn)W bPƝ-1NՐ+JQ4WшE*K]{`ݸ;ISԻ)%Ȇ`D@Go$u 9 +FUW=?;)B#Y~VMa .K ʘ fPbGqLMzYOGO5G#xH\qz$,=WKĕG#xH\Qz$^?=zz$~HNz$n#&=I'=W䪂K*eܕq{5ߓgE2mdq,)Ͽ똁<(! &LЍV59c rz?7ZL߶ p:p LqMb6\`S U;^+E>BuDO^#$CmQ^:Aޏ:(tdw]xfMtoE'Da磏ח'UΛMJT!g߱3㓎yyxۇyc.JmH3F۫0Z/X7C*(QP"[ ؀>(˥ Pqjh%_2us:U.44[-Y4kI@d@j*(V J%xBbq_rR%0M+m؟o< ؟4Zⱞ[p#Kiچ0Az6n?WP_tN狣Y0_e,$1?ĒoE*z!⟪5w/9tLX4xi,.qP JgeC2Lľܝq=Gcws ']] ı39F֦%g& zne}+a٧{{cb1?xLqJv. 
z:-X\B;q ?;:Wů`󳋫lH'g3r NeN`Yxҫyl- m{3YLɢjem9RYe,V/Xoc!'Q?zɜh?G^P_hJF?x{z`1Ղh'd @i lzy%7^N9YEe1P=BKe8.+R0h\gvݞYޥ&&F!G̟+{ɄJD/ 砓ioo;kٖ;Tm ,;k̬{颳>ҙq-J[ZIgn!Gc Ymc k{W%͉R*t8E {*%2d.ur UUE| 'V/+/(˼k8xgNߍb(^V0|\(2Tfq5٣N/14{ɀI1g(f=(1t!0ٽ@.Ea3kkmH_enH~0Yg&$Y,aS"e-ݯzC IICAdyj4}K~ wvvw|ޡ祖!kvRLs7nݮ7vl^sVo5⢳TU?+\v8o·x;m!+(Ң  u8x1TjE_mxf7XCۻ=Jcn&f\g]h Ǯ:ϓ3a2?{1/8iWAz{0=Y $%UV:jRӒS/:Kk#biUzlKJ:  GU}C-{2#l??քleb=Y,/gs?T~o6ͳҗ;T:}3WDngXmV'KOM/fډ4xwh)k71j"GT&̎fY.R(͖C/f)r3һϪ {G rWEpKy&wIQI)%Yjc\aLfwUjWW" xcO=#U::O X~P0vY4a#A2tj_ wE%Ka!W] @n` l qKGJP )=/>^u7v  (VN4IAxhnHnhq~X-f{L-ڣ}'wႾuo-2ܺmځtp@heN[7lX(D(R!J --%)~ w&79H2;<40"-MG(L 9%õPkkAoa_>V”o,ː!se|ϗ#=m&[޳Lj~#Sx3;>>?Oc~rP"2˶NAg$֎ĸTv(!dK r&X(յJ:q5բ nhk?pmwh2iσ=L9ϓ\;Yyb'm1M?p7w&[||{Xz7 XPېp`7oǦf(z~P(mٍiWࡵ'O/"fIe|s|dmNA9Np!_V;sf gf̌rfYgfwhyU1uګTI=H vO'mۇK<=ot 1O!I5xt+8b u9?{S_^ÊN<-UV[\tcoƤiB{Ҽ,HSˢcv*C1EgƼSʴ&yɕv%M~ҒNdX|>yY쒬9;P EM'Iѯ/z0|i9q:vrG-zVO@} {9=[b&_(̥ծ|A, RJ/mPqxYǓAad?S$gvN%8r ſY۪V3FVuFp6[+j!x񿇥S +2k}J(I_t|=q`1X9\ @`r/$QK-9+Rp& 5-@/2E20p+0\\T^OqxdE%:TiF*jiҔ٘ +'d1x%g=oAG /(\a<LKeYmAC;O`\d2|n? N˩N Q&cVlIR-/p4;fXE^˦P"W}-ZRS\-&x#32x5sf"pa/(]8288N[Xs̓nPpq8β F3Lx$N0+D{-"#a:0- 9kaY~AY1 3 r] L{iLvŒy,cZadPg($^2#YobA% 9Ij$f.:2eN'5)d3T"jUu.σ>\ë6x{;<ػ͙. {[%ħ 7 |8Wׯ`Wuׯ~+ߛ1..ǓۧMXӽO10z4]3?|26{O%w"al]L 3K68U \ yNG:6 ۭ!R"=ɐ.xJYVz4נ|]6NM<#;JNq0yTEi(PT2(TF7X go9$0Å )fM6~&)TDV9fE i%1#A fʁ1cEP@aj9}ˬ;?ZD,cHv QR4`6Vx+H46 ]3OoG.SsɊXuLͅt[ C$14.BT_FJ,p%-F 3>blߣ-<.\Hq7wwP&+4o]͋3V_;W7]/Yg'Lb2`zm[%GYVͅ-wFB]#I9#]v C:uL,#0W2 V0bYݘl rmyɮQU'Ѹu,C#i`ĥ bN .:^Щg`T.~ǯ~ۯ0Qͻ_ ̂Kي: <_4A0{C0kh*Њe\+w>a$+vb J?^vL] _}Ӝ90"maiM t%q=oi_{Ci[f{ܥD 8 ^ff'#ecR91[S8ƂuN0l0z T`]؟"2-y4H)ʝY4_am(sBxܨL{,FJȩ\8Ia:e9 2LFs{G\V=$pJu+ˈ`KˤД) 54x1`LIL0FA|$!X25RG7zل*[uF+[d5r eHbߣ?v+jy1Įw`b3sS8g ћPP!r8TP9*Cps8Txp?Cpár8TP9*Cpár8TP9*u9*Cpár8C1"=j!iaV~E`RGTRJmɌ6>im"|d$sYrqi%S U::Za[cR;P4 Y@S ncۅ3V ,"?oh߮6P . 
K룙>T(VN=IAxv]w}s~48im.d|.w N4a8+gT:Ov* ځQVjy3QU7rFA U`hi)1H-h> ߟ4L;IN8}1@,YydcKI S"!&"@2s$#<40"z_#iqanڷ!-]U;酙fx7*G ,@ateڽ(A*bUn(JMݪ3oFdZ:vGI{/ĔI!/,ChL|ɪΎ2Mlfԓx\1CV%ۏBʽُFj>4C`0)p/z9-J-AƒEiCl5ոN'k5&`5Rx(7j<[֑aYqGkZ )$1#"uBo7׹nW \^FtE_]1͉ywt@ صÇ1$+.x:S & wHiJL,p(}俩pE]Ѓ:{˿NgC<UX-ޕ64鿢F'ڮ`_ƚ1aسoLLuBb$KoVKqԨꧫ3ʬ#Z30{-#chnDy/sgϾxVHיa>a?ť9Z㾅|qpvf=)Ccu|H1[3$f1*dYiHr+G8*RYu;?Vυ_fy90bQG"`"RSFDD b FрG!eLD0+5>VY}w6o gng2(3f?+Z/oӘG"h@QAsg օ#*$;A h[tKYQiT㤘#WJ+_ >Fj`6 2l!F~,b4<03m e2+gIqgm)(}WQh5OvS?DVzZ,z̖wD9߀sem7[nR)[dz ޴&p󠑯em¹x=DmHFϵ^ʯ~wAA3f{tыyp:ܠULr "p\ ގ<m 8 j Lyr]WO0#ѓ8hi #,[)nqo⸐Tہ)np0sHg[ '1bePҏHeLLryå 3jg>NLj9+TK+ CٓN\ykAnws`t'`XokkD7Yo>v6H|ZM;= BӨtK[30Z1hN-K'}qqXFJo*YJyp@^)mkMad`T݄AZPbvHɈRć 'uϡ]i=Y)f/;=1%H.0>ƧgYO#?(|bHh8s wXcZ0zn[ǜfGU,,;ŚJcuQuRY_Ƴ ֓w@)իfNy 8)`lt#5kK>aj\o,KhKv]\-! 9—v]^8.튈"8B3S,YOxbJ-Ҟ=q Ǒ^h^W JPR..SP2{,;]Y[ɘ2Ҿrq^D+ckYSٖۖ\t;}6 gP'!bfpBxDYS|>p̓eʬIrB v-/>K'Rv {٫[W{'X~ ҩ_M[S5[o K -, QCvYiI(JCiR"劐X>TH/T %Z |Gm7 ,3A QUڵȫmBl9{?znL++ LQk %N0+D{0EJGt`ZD"rR|ϊ^iXBg ,bL1 A|@N@ׂc}904dӼ3SXcHǴ(%5pFBb%#1))DXҠs0I̥E؄J*M4)5J1c`TNQ3R.w|}yTr5?R*R5X1Q5 eI]ЪO](5UL]#˜`JCW .WeVeRV$!౬pNW eV/u(Vt:tGLt1-OUBԲ$V4Q\<`KK &|R^ ]9_=y(FR8]-^\ Sۂj1˖%,@WzLx*f4tPbh_j}1JTttE(ײLtE@\BW -NW %] ]Q`%XZ`BI+gp1E*ֺ4tp1. ]%D-;]% WttULCXSTf*|*]"] &%+̈( ]%ϺQtЊwJM*ZAQ\vpy2$4`B+ŲUBeEWHWuݟ)_{gv.;Amj;Q3Z4mMsMeIp߲<  FݳV|碌sAǡ]xMS0ll]$8J(S:km(ʬzߚggm)~$?޸R(NSM_>YEHJ3-9˜R 3QsIl iQ C▱ٱ&%blg%O"7JdWDGQJmƌ6>f)ެR/fWoR(,`3,xGʠNLK`$Rţ-GP]y4+h,]%uItJdt]=㔐W0~YJh\vJ(6Vt:tE$Kcn1(Jpui J*+ZAIKDWbRJp . ]%/;]%Utt$*DtsQJp3vjt(.oEWBW\J-4vťBZWہL]J?}4`k 1F: lp6HFZ{1[,} *9F\2WnղNʹZ/^C3;2cΚ)ff^OB{|_,^wzN:Ås<|W;mzivWZ"ڻ^7,:fuaM$ZM8~Th]ck>X]_L2-WT}H`Fnq~^zk}'MWӽɫmԷW:-Vo˯zru/Go#% kE43X8 $8e(h"l^J<"6>ll0bQ05,DDꥦ0+CʘT;_R`qx$ )[X1c2b=6M 0yPn|:Hs{ޮ7  " T:  [f炥֔zb^ iZ(r*t0 [D+ %Z}9][k)T`ysL;dQW`@B ^h8Q3s0)7jP( Ew9lPEb"Ӳ880+ikV+h%@kE` X(* X栂ywk7{HDV`dK! 
ڃ@N")|/E|vyQ-?kF5OM`i>qB5t]fCm?Rk-u?0y]FW_B~;H{}asRt|\+t_AÀUؾ?Yhf#u="Ԓ$ JL[ |`u BLjxꔸT:?w~%Oe40i^؇~;;pS?^5'czgPlcW'IWbA +*Zmb+}Pc&~&VƱvcӔ*6bN|EߌS EՂT-(3֎rS}WYٝ5d; } ̡9 Nd۞)FeGMOһ7n`gqF^V"b'w=W,"Y+dBiTV`Ru^Χ~vA>1ws}?hWRպh ^l뜧{06v׻#LDn`DD/^y=.l>LFr9&v̀ ,|*3-]EjOz: o^6rt:#g]>Yyl0+/)̱<}wɿϟ;{C(&Wdp<`s.n2>32$}cɻ<A?c1Z0@g"$0/O6XTnGa7[FB*֐ɀv̙ }*(ܠPtm0n(R>T?Hg)=|؄uә*`C@/|`u M}=c7ݗBn'BHn.'Z4BokNuө>'+ҹme0aL6 lp_FK`.̪=gmy@ٟ*90 a ur^\~݀'d]zw !OtMbwzbgjiyfHZ?CRW$&NemE4In'ן55-ᄽkJgkZ3ćԠ/1>G7q+g=pq_h|7y&6&??㇛ I <0Ռo;D?_NTmD(6F3X&&[;fǞ5=c[|737Q5E<Z1/)7E<}vNExjpƄZztPJtu:tXkY +mΓAM+@ː}tPvtu:tDI `5ɠ{2PtP[5BW *Rpj ]ڃFo5>SNoY]UKFj(#weW+MZAt3dM+ֲJjtut)F7=rC;b=vR()]4 RC VI= e ;_*uS$~/體aF,I?t^?C<~7ľ1t "m d~ŵT҆Xe;Av"mX 8p+ SQ8cTyv饂>sa7l8g_`:"hsf t|;>#?$63"0l偡sq |ST$O$}E{ﺕZ |`%Ury,eMe''WX py:/!"āj10|:>5XW_olϪ?,U[<\;@EªIY,/Kk(~?^CJ/鏈WEGT˄^[ڥAjLI]Xf#\z&gbFzxAТrQ*uV- hiu}wNs/v ɥN].cG_c<>3`۬.pSٺΣI>^{.+ݵr27]-|2Y,mVϞu3/lx'SO- O/mgv*.2+~qѺQUzq_5"9|@b"k?u^fN$Gc韏SL *ZP7q0{YqARj%e,zUo]F3d&JA'tN^׼‚oሃ!XR#5>9yL')־i :1MA% J /pa#IO:ꭣ 8 G=.a? r8Ym;Dr=_fl3s%w-\ߴr6~խ&j&aܿsoul=sw0L7(ڔ }vhv]tI0D8%Cf#.-]ɨ.g'Xq0¶8h[ L^.-$V(iKɹSw׷hN@JA;-t^CYGE/a'οʸjnqOzg;O f&$ .ea:.uO|鹆DĂ3 ےRF&|LmWzB]bQJʾ6}@;tu~:|^6o,̮  LҐzwyjIQ O0lW25}E 1)˳Ӱ,3ؓ~`<&icR bR L^:T9p"$he^:jB8;-EH@~.34GI#1ۦ[ "pmQ9\.kcZ!ѿ ~<kٙlfw%N 75p)S 't|SgX~v4ʡ 4(ZKؿ.WOj#޴ǚ_0׿*y|ٸ-,;[ضbC0TNY!TRֶ:%n[[hX ՗kq'M0iCO>B\!6&X2D' ܚ4[4EJƋ}_#cX֌Gwm2ζ齯so 7=k9/Vh=2YXMyrRM>2>.pPZqH 噰(5nL 1nJ @k1LJ6 晰`Kw;TECXelp@krD0kqf=[am1Ay Ww҄ ,[b\f@cAv@*)"WC1uPuPg'Z멮.k$"c}f_۟3:tкm:rum:Q(/sLJ5\nƕc_`@rdHD`=AH5}][깥&X\ġ̕¢Աm״}'3$-ėA( ,%D6+[\35~hPᮍ.u=S%0ycl ˻_co-A&'[bݠ=x5k0 p\Ej;wP98#NarKSurbvx첩M;>a[%=ۭD7}"rYMmm;,lڔ$”՛VoRouQ VAf~[S]v^#)lt- (Ղ]He./^C/jlU)gjXZسRLCNiɔ^wcQKc*-u}qR 냬iNEGrr3!62bw A~*B1MvG#Q[\hw:+`+@yC+,頵Qm76$4ɫ@1kLr]RzH"GT0\yآ>gO8'LیbSԳEw\zKoIњaޠd/&b{/m,F{8md/#NiŅ"C}6gY8+{ P"@. 
kYRgVokmY 2UH;^QreKY\ n4<0Km*CA ƙAR.=If\g&<+dBȿij͞lr_NfʩP{h|7<\VeTة}>*LNe5ZZV33ԮzjH+ ݒwuaRׇG] &Jx3huN!WaiDy4kWm {."`9Vz&#K-# 0|%gS4a#t-v`ldMv~K& t\|ڲ5K!\a\aqǷN\i}pg'ZHSzfcgXi=%+B;$ҷ-\h+kADme6D:]HɭtRLܾ|O 4Z)h$jOitѼZ-CvZ_(Me` ͷlm/חMP2/ ֗e\J '<>>J+0^OlODh8 ٮ4jK?Ś>oKͥJF(=JH16.H>jK ^Ȍ ^`A Ɯ C;BI0ԤЩ,t|Onte9i:[/vhdaQk %I.P!`V0/2b.Dڄ X=wr`8,r!c1,hL> 'H Xk ᆱ>r(A ;؎B<FHH jbKFbb$S, A9,a#ֱMXti;NqMrVs 0%IDYD]jUmwV'&<9l]@-~ 6f;, P-Ъd0sު}ܛ` pUQ T' 3(F ė$0 Íҵ% PHFǤ"2RA&pX`K9VwU~"諓 /./2JJ-"1 h*pJD"HрW\ `7{ݑW!iK-M+1AT6X C$1V#\v օ< dHe'_%Y!/KT^`Bѱm)*Å6[rKn9 1G 1H~qS qA#`yI>x:E]`Lw?9>V` 8"a~IA)nĶN'ѳ.! )4K Ւ t,B{X`yǤchE!i5Y}8hLk"nR@V߭ ~jj 1LZq6{SU*S9ZUiev׽<G7*ggl̉lV-S8/?iO # Cj3K,}hw &,nݟ?uz.u=ErX}\t]9;^S4[Y]=.\{ xWO?iգ7^|CL:|Jxqm^^ AM[ahE69WWlr5.Al6dm Dӣtx|^'4]5s尦n`T?b|,C9(;Y(!S“l,H_+ۢΖԩcPz;$ ce`~X1v?2K@VB˨G!=Rr'5yPw:_b:wN5ioAi9 ' [,'AR);K{G̬m܂fѼƼL MPCQƔsmDG0%Y#qMlBv ֮[Wul&#hE|`Jk:V(xX/$qQڸI%:.jY8-JДVlk4mIД'h()»M`&A):4[>ov>MV g){6QpƝvTxo@ifSgq K!Zqh`0?9> {0C 9:NxÔ(0% BOԒX}712qh~YbשH47~OE̤ߎe^jPw }<dg|l3P tx勫q})p|)$KVM5;MSw?w0-Åw;޽?oő^7=X`!?_z7#I;wfG|& gwrXܝE~qwvg^Du;yCt\cs_zPc?Tfw'<\.x.iuwpa@ "KBic{,`nЍcV'^c`nNԎt xW1b7<L=}= FqvQQ?>ٓ&[ϛ1Sת^0ҋO`lu*aklƳv{rC4Usp&|4[qtm&L^G7ϸ{y=K7\5:헝`1ҭoi2Ӿ3)NsWu鸌 XaYH`:)ڥ%}y0>6X;b^([e}.y#2Vy9ՈJoY,aw"?]tK3&FdPcqX秸J>`P+(}85j+4|g&7C)`AW·DhW7]:wP]I wJ{J!wɊh!EBh[m%moqAHeJj%H\)=ei~qD;q-EV^(W|5JSQ=PS[4cDd@mJJ]YS[ٓ˚,dHy;3d4( Ru2zUlT 896iB Cs4s܁y^3:YadEJ5VӅṟ"W!O 6OwQͦ0ojY8H`,YUHRKG <)#f~^-5|x; zTkGzE,?_5͕򒗹U׳22.ƶ:TTPJ ELPdT`'x`ᰡ )gcO# #5Pv2QIoSEߤzSa00.8&ŮAM0KmXPhzS(!d%+" HeP9aJsU}^>a&Mƪڬ[ŌڣhXhshd J`-YPDʠ pd \1vi6j#:[M3+"+2~wiTEIsv0[c;gAut"$U@k&)j Mset-xf,yAmJUJ庼zK,#adX@KPJw3 abq؀$>o&JXy1SN6}6ki>zye8b͜{M[>Ϋf,v`ڠ[@7j!}Ñ$0g2fg6: < ;oڞŠQfxjѬ>[E=ȹ(Y7hhƸނ)G6v,51+j1$ȭ%\Y#eDAv RT"* B) ;X ַݪWȰ.ٸY!ҋeR޹r \n69%DxyB2apa|(o(((c8)UJG2jMЛiUi, mX1)c)X#"707kUeR"w&p+גWQ664 ҵ -H:1(ۦVX'ʐ?샩xIk$\nAUqZi#7(jaALw zȕ^B{)Q28o{ L"GXi(!e4qBO*t+ /x@$\0 6k٪CUUpuD|Q3HhR## !oN]XrE G$TiAu.,uUޮHxJ15aL`j?,zt.nfkKJD!SMsu :Uf fR_w@ζF͵Q1\2Z]HfYTvsZ\IW',|NBlꠢ ]-PVjKB/Q㸒:$! uHCBP:$! uHCBP:$! uHCBP:$! 
uHCBP:$! uHCBP:$! uHCBk+& u9B21BWFP  [&HCBP:$! uHCBP:$! uHCBP:$! uHCBP:$! uHCBP:$! uHCBP:$ԹB  1B+)BbՅ:JMB/R R" uHCBP:$! uHCBP:$! uHCBP:$! uHCBP:$! uHCBP:$! uHCBP u@IBp1BX?W:_PGA5 uHCBP:$! uHCBP:$! uHCBP:$! uHCBP:$! uHCBP:$! uHCBP:Fw|xO_e&hyn{^*9bZHϛ\L^Pcu+QSVuV2J,DWK窗u =3Wm"r(PDd{nr7Gz s͑Z-V6_AʖƤbIWZî{*V:JW_br \5]AW\nJ*}UjJW5|dכ=f@gOW2XϽCbyŬJWPlscd7(]mscUߔtU캧bԔt}JzFꛇ9yЭTLfG Չ J§VGzX̓jdYyt-{3=![~+"HN5mLNs%OYlEZۭgF#Kz`tQ܋C?LeIj*o>ժQBl{tB̖ݽ;:³?ܟVufx?FaZW~r?'`*Q5=ϺZ> *u/?^^s@]ojv4/R~Dh׵ԭGdgc?;s֮[훝^}뛥q)toޓ'>2ʳԟX^eeq;ʆ7//y1Kh;yҭ<=~?N=RNۧy~pjVA_V}d4MIW K3a8e8=nk9vӃ~8Xz'Groa|ϳ}`tѠ_ e#(VgKWWom8>r뇓 Tσzumރpq^WUﻲO/lyI+M?z wmMiwLFq;cFkno!t.րQK%V\=r>R@PfΞ61o>uߝd?+v# ˀVbuU!a7QYUx߬w˔rL7*_ "`OI^I;+OraM\F]jCMc@gWk1凿pgwby?D|Um[{eePMS8]4l\Ƈ\$U#+oX9}ǣ^;}ke!ȃi O:vK_U],037&imtN׮*oSc֞3S4,^gxqnpXl.r*0lOyp=S!IwfW5F`G'dhuKFiE/)ϴ:'k!w[t/>|B 7t˙+9O=/\@,y؁ ѤyҶKx@D\I%iC+&zG@^]4;*{sm}2 ֵ`(e"(˺KE_Ni"f)oQ R(+ 9ɳ~%,_*yU>*Bmms6 w 9fd #@Mln9$@n uX}9?̂lzBnc煑s W?OF\p3-_*je}jy;\n5Aʘ#?lEJwڶkEpYջ~jA?{W8-^id&yi4w@~۬wI#_v^ Ǒ(r84mjƖo鲩 +@6:YAn2uT dbtl3mn\y첶UnodSM}Q5.Z擫d G,_S΂IutOa{y[YR) smQhRN"Lǯ~7^xëw~xݫ7 ^|ݛ V306M¯"{pxMUMcAZ4-P.ަ]eBsvd5 H~ľef~1zYc:,|Mzb' E`(KLYENoQrK0݀@ӺpLjfy#?u-GiYxS3k&fp LOHG'4**\Fo@ v6ƾDlSkRIm*KMbLe1(@?mzFaJRIV7HBK'd)21 i9 ĔNCT1D1_Ky0FZQjUFW[.V/ %4A5Lb"UXagHgCT1d=rDYBMN627m[.t6'FҖ"v (c#E}{lEy@.RovJ r_Ǎfh ܨŀwJ:h&P!pXPwlTͱQګq`Zmih2>]|ӆK{NWw t.Aƴ;f $=Hj 51­R5WKSJ>^I<ۯ*Ki5zԂM;4ؼL?NGnb{%e;.9a<^ł8(]v{-b~`bOPP=$`@hggO,<%8U R&q  ߜvBB;;8]r13|j^HBi9LjrB1b4^ڗ$!%Κx$ʠTl?:_UnH LnOx"#gHe#ҚLrȴ0kEolJwx.C]oX=mTܜ,w:oȎA'uctܐ5Az.^mdqp3A ^t5ZN1$DhH}I5qsk'}1<*r6jJҞB J P !GZqSw_ķ95m+wAffvh꫋o@fE78yrptWAÇc:Jۮ08>+o>uTJF D)ByIo\ǜ&wdo)[B|z/<$;ǨG"Z@bmZBBD>VrPy$Vj5b{# "gs! &#L>F~4Lr. 
";{uFpXU}ӫy^gZ#KwJZJd<i #5>½fJP:, ۫d;*]ّgs30$(gkjV»+//hQJa6nzӲm|-O-)>QUȴWp[t-fhۃc [s<3$xoT=NvO/^>il6AP/oud)V1AW'yk,Y O-Ux "\0:X û۾*,#v,ς |[nz")Ѳ_ϒţ繡Nʟo~_&U@Uouf~=dn|H^1)Ɲ$^o#ú*8MlL>Og RaQ_?݇^GHV&F1gW.Ui9 pXHN 0BJD/#Zw'#@~c-{Oҥe~,]Z)ZăO\h]v(p!Tܾg\툠 Ĥhe@@iCN J'{{b~.)R8rT %ZZb]i!%[Ad AcqYο=@Լl־9iIhS4.4߃J~@bH !vvz3Y9uH4piރ  XwAѤBnĴ =qGQ^ޤ*3yԕz?*p:/HV'YQ)e#O/cpX`s.yc<]hbjGz{"CSUn;z?T͠9N;K';p(m׭>ȦoΚ!Q3-iLcu $QB)I)& !9$F2só$AYx`X. 2qk.)zuav)E^&k<һ Rе'#kۅ2]]ExQE1i%A-wf+Um\f.(;ΟzV`kHPY$^ jE^Eem'e1`%+I=7DZ2lUs[b#]}}ފW;nxPzapIkO@.il4_|i7/<~th^Ў:̍crL(0_ <|(M, N SP:PsD[3D(zy8;>G gTw"WHZ(T R# N25*-QZmRQLR:˰&%gځUV`uڸT6̶!>4zӰqtWJA):vH4?.2f1$ X}̘E phzx4džGW)!kBKd'<FŔB1UKl)12O<#z5יqS;Y_jK0@ḃ0[RO5 KTklY-Z"SqQd5X~BWMG @sd@^0 vC$p|'bءA88m1ymyzu£sf_j<~~Huؓ@aDʢk0z3nьX>THtD2oO ] s$SƕтBq0lAK C  B(m= qVsd 1)`2N@u`(=[N68 cEc!1:.da}tAI;Z ;GsӫQe*V-[`)=A:r} /7HGj%<0 Oi"GBuE/*RKcWWJ uG*LHoU$]Ej8vu|PWOP]1I$=RW`{"7*PѣH%BzK!둺Kuh{HUr+B& yU$Wʾ@G"+<UՕh94M ^3j?r#]G-y\Y< u%PWbPWwzr&XU X@u}ltcWW02 AG* SuɅ/*R񱫫H%ჺz CDW]QW\g0P;J<dHQW\q_UcWWJ*uR>@ioEiu}]TR!5*u"\$_>\a=K3&!xJ/@΋,Pf׾o3&rACiN$2J17Lag!n YY<,MZX @.?G9OY=gцKk%;Lqru|D뱝|5kRWvsΦ90Xo'g'_(Ne]-T:82`ܻ ݕE*LUE+i5O|Tw$ϒjv|9V )Ys'ɗꪥU[ `52멳e.bM]EfpZw7JbheqWu E=޾*OŠ-ZjHe(ۂٻȍ+WIS`p&ڈuJiffE"}nIl3tD|^d0λT$ >%}${ %3Fx: /]>`e0 v48e #lO-!.46pAXj{;{[0CZ`sQLn'd?JsEٓ{߈#ʏi9C9GVj' ceV#>(]׆i3\p& G+@ u[{@%˿)8^; 7mWoT-{Vɧ!LPGZ uVՏ@?f"<]<իCgYc0 6˛%2_e*-ۛFer@>S*oRMBbԧ"~@`0r-y̭$h`>XŃ,*Wnn5+{Ўtd.M7M$2{~,, 6s`2=~seQj|$23wѴWu203MJ7e1*A(3xn29L `$Ť֤ 7p SX%]Opz~-\2FÍ"?7^ZY HϗIݪ6 NˆowG- eGl^TXnwW59jO25rM?*{}mD$ͻ4[cR;ĬW逰 ߜ .(M~l딡/ 3yQ,Y\zmQ|(&OjeiMhwKs{20׵_&) E33=ȺUL;ͲP\_i9[,,m+r=f:CʛȉoLp0Z29]c'䣝9?ovV|Q?4cku߰NG#G5-WY˗ { `np!/ %wcԑ/ j-]zP{Κ@;{Nf (RGC<>td<$gH/ ZGK!y3jg>OLj9+2Gq Kԏ6}7q]YɵV?O\|n@l1m kCbD2I_P%tsR.{1 ܃(S/pҗDfk;2'E6IT4>.(c̓s-g\:NS`2;؇-OӉ Kg bai:@c_C[nM._oZ]<ρHWl[}GT`6nٱSd<4n'IW\qNO7z>Z.feU#ǵ<$6nˆ^J瑅H>HԔ1тFQ4`pH?3A.}M|#n6J7]͚r ٸ֗g0klo%`"{&CDX{))Jnpb΂wDðqI ~`moNkKiKab?%KutkA$f Tq1SzC9 ™cteYWOXS<2SCOܢMA҇j>NR:u'7liɞm2.*E^N܇0sr[.Aҵ"Ha9Nj*oYۻ(aF#) 
Ĕ@GB}oB?wŊrOe7҆E#w܎܏;&hQj?y/[;r8UsPp^Q-N܎YNs8=Gy}洚:gIt (ǖ B`ֻ!zHG{!'kyQ~O鬣!cFY; ZDl ha[0 ^OK5; .<6iVO[Lr㿝pr̴Eb72݄Au(4vP*_苲F &0s7-=1]r Y ^ɠhG£yQ]ѝa0QD/)x:J9DŽRn&x9[ŔZZ 'RHDc/[=] R(t~̅+T klَ-F-/cx}Z_32;E|?fh>Mu%;,t"UvTtV7𺪩)Z\ѫH/U(֢tPG qpiS-&x#32x5sf"0|/l衁pfgyb6Opw/GwM'.xqd9ȁA~FY1D 3@͎ r] L{DiLv˛3cqIH5pFBb%#11)DXҠ8KFbR>x?:pi^4)cW"jU"(SyJ۷5 m&:?:&:b(17t. g3,_]5zdG~r ouľtn-߭)7)$#3 LFҌX5xXDBRz%l =6"ap)fA2 ǨU1kQR5ݰs[ Uetϲ~]_vUZpŔ& ߼;Wu`D.XƊs:"QuΣ*JC–aF2HR8C5x$hn1"ːNa#AC0^|+?Vs"Iv_H2AX4R 8x%h+m.Vhla]/o 7PcɊXuL.i$I cn /#u65HN8oKF5aC6KK |'O$STgv>}\r%yeU 4$.*k܇T(1Cye\)riTtWVmݣ϶ .#`LG#L9B'RP;1e.検< c/5*@weE-6`MH\:$UhﮖD-o`BFgB1KX_LVY!Mve*A%7_*Ivţ*% N NAɦCS'X0rhtEȞTE9L6ƴ(ųk3?+wo_go&uՙKiT[Ws(R+~'A^0rrǡxjIΖoo鲫ٌ@ubyC1d`Ť'rr418GVAv5VD2mGR篓KUK_1KS|IK%ǪJwT*&QW6*י]]9ߝ}/~~߼;D۳w~ Z0@1tvA< ??>?ܡiW^4U쀦c}z1ᇴ+rG[Ck2hGn[+}ׅ˨t)ms^”YZi 6e2}Tf͑W+JV.}%B`.2.lm6&v3AS$ƌ:UI6 }{6 ?cd~Ve#)E)/L ={_Q F\s2 :(">p$a$H0r0;bߟ0E2"2)4e{C GE S̵AI gPlBu]}[wuFt[d3r eHbY߽w+jtlwWr㢖;qQ[NQ-pj >{8T3g0f`&yB²YkGlIQN)93Z;MϘn=i:R4=c_FH˯7> .C؁ju@HmojRz\T`p%csx؀6^_.hc_ylJ %є>8ɲͯDOR]K_?c8R\\]c.x>wz鱢Ηu>-S@aƜ>ͯf.rFA `hn)1H-h> ,D}D0!PtrN#y 3w fű _N&Rvgd\|S%;22ddpn(j""IfȆb cH+1}b,ƬɈspOaWk>xU|>6y#<֔1.'ӛuw ㇟҅ixXw%z黿<i^ 8V>0z&V@8UTFwWkW C T&{u(-Ai~,-奧<Z,p%%(>i{O‹3H.iO?;ag1B=_g)8)T~YsJw<Ȩi4)X6 qMdy q=ǧ 3\究:eWԗ4LX&dPkmBX5$aRخ-נ[V:L5]F b:}3,QWSQn*WLO]ٴm_vu}Kb7̶4,yoEJELJqpw=u2%G)]_=Իiv-Eӱ-[[\e-͵MwRbj`m+/%}X2k,j6•?)D\m5}yoc֌@-+<`+<\U_+<JC[ fCpաWT)yt5FkCW `ythMs Pe|kDW`Zm j4iCWHW*cTjDWشjCWWU޻Bc\ɻb̐,.u+DZNWY ]m ]aVjfֈ.뵡+kԆu+@iͭM+]jDWج]!\˪ ]Z^+D ]m ]S:-ptp<;]!J6ttejԉQ'׬͓A@kt Q* ]m"]YQFtxW4BWV~1(շBW?3 =۲Nj'sµV *l%t5'u+*NWyن 25+(Zm f] 䆮6(w-6tp56ƺT6TjL]!`>tpxWVWZCWHW5++kƻB/f87t4t}[VU6tpYm jkD͡+ݔUͪ]`մ}Mm4D2tV~K BwUN2;j'ѵ#ǽkCG b$t3vTHL251e%us3NkvXg׊*+BdבfeHv$ *B< E}8{|LV$0_blIXO5긌:7[ F{"Y7pAלq\ޕ<5F3Wޘ 咦LgLI/bZnLqSQ:@/K:-]lϗlY`=B#^2hFQFvA MqHS'Y$4q2[[` yZ H9B tb?Dg@SX {>^wpyeydXJ%O0 jwo/l۱3"wo"b ǨK.&/H \d<CcB0/9?z;owT087jDkZ7/,< d=9Ez:x}yG͗R"d[_%7G?}D[CIuN" CN_dva4 |q"D8 qmpݶ~A8Uݳ*ENvȁnUNJ?x1z#WiN9vIv~rahz8Հ"YX.?/.O0։1m'&HiU2y1 
V8d[OFbON^}69yʌ`z< kL_VdGo/NA4/(kLJ#b>kUz F|;$al{ڑ9) RaZ{w զqs&:Zt%䟠Lv#qo!(ԱK՛0~{vmNʁ.I~i"7@sG szR^Y.B[ |jjL h%4WD|`"/D&AM\ ,bi)|sXR%|R]q^*Sh>4VJMȇw$)#`g+m=/ 7d5> :l«Ο{/*aO;ڳU;hM}x@_3B/X6Q-ufh]Us=[ؔuO-1M1_LfɊX{0{88vͼv*JxN8 #Q/ ǎ+Nyh`aՐъ= &ս,>E }>6(gۜ"5\kElX1x6Zrpv1`ѪfBEe)[I/1BDl4͘ҝk"NDˏGew} 4"*l6&_iaՠwUZJR|Bu4yvL#' g.@OQCqe;gq|Nv^ Q1J?,E2y IϙEGF*J=ރ 2l[H +Y&f&DS*OEBܺ\W4Y#ex'U*z?Ag{ējn|S^TAXi(9E~H9BH.9瞣3A _J4R(ך.4[Ѣo#XK.X60p-0\o rXfH×5Ρ[ix#VkM<ю6EBĉV?(,-one΀;Ht&FSj ^?aV3ShGͩzwsŧ֙}eϹ]hqO|հj43)o\3 a Q >7ai0(p#@<ŠU^;rO%zTN1!eLXό'!\>I5xLx=qh})GTF-bknkp^_}\Gyn] !E%7Thk>Zń=6=/Nltt?vT;pkrclkxsIAn.,6 bb4y[eX^3vx*W VIA7\\uQ Pl TeJ/D!QR#AcD ?h'>r,(LkN־ >ZUץr9d]hTF!u^mk5^qa~_r]RˑZsX  b}4B2N[$YB VVW{yt,U/K$ s}]NnKR4Kz 3(y]`RyɝAQT`IȍL>&nb X=ue! KB*"͔p@͞E"FX&$5Vƾ!3`,g$ Ӕ52JX\1yބG[Ik%]J^KbM*}NDժºÍWmn?QƏ'8s qvJFIH Y",ROwwٓpdXS) w=d؜ 1=Qv@VK "N*1i7X8$ié3;h790(PB%ly&4J P ǙsgՎ0 6c!)a#wg)/3g-T]@LՃJW{&rO:Z>5V2z Fy7A9iVʹh`V9ѢVɜI`.a hAN2:3ΘJ.`,$Y#sI^ $M &JF`s*#Mj-O&os(Y]+OswǒrN|u#PIIh$"h\)Nd]t)X`^[^|5m$HNF}YbRl> %`Ŝ*j)8VIi+H R]0ùTe?>f+n,n ~Sk&MQ!tCej\6ei/9 8ߥY ea^Z_g̰OetK [ݓD.bA,a6DCAB L & K|x$cC; ׽>`8]&İwg%ol̮Byt\]v7\A:d$sQr LC߆ ["!G񺸴B2P1֏o2! j&:&'>$ BCOWW W?1KUKs՛wWW׳vg@9v~RszVіz?^(LloJn׵dmK&ٲ.ꚡ𹴙Yfb_Q@Lż{ 7mv.:~pnu2ȺVj8muDFrU_ aJE6XCRSud}ExתNz]~o޽??:_ޝzP}O>{߀am]yHEn5?ޡiTޤiMkߤ]k] YK{k)@QxU/ }~=/vfE>5Ldr E| b~QjEgXLǭ^1z"K q? PgYcX1[SRqҼN'`` ~Rw0]~V,k̈́W`g?oJ[ I `a)8yJA;IIʉ  *umϨ 0[q*9q% :,A$M-(SP"QҐhbg ɵPB rKݷji' #xCȼc ʄ͑Ĵ{]]QlX/dayD x_  غC-u4͎#p(]iHaM7#mBSeԄ6l䉄m*ųцxpmǺr'_֙[;U7w_ᦓ+6X_tiv\ϧ 3"CiҨ*z goi5hePAղͫ]E[5^_y^@Yڠ晒aﯨov!f̛Fs\38/-zrʚOnjI 3)~S!!ժϽJ*qG֥">^NXO*@qE9h?GLP1U9 ϗյ\x?^x#,*:L;RuSu5uWOl[%vۉ޴]/MLaX=Ar+=%/L-/[jfǕBzXB.² -KW5.:֗xS|?UMjo:QqG_QQU~<~Sy?sd&oI8x ?/r,/r.7 _ ]?wsXsbꅞI%И3 Lх<c/"MHp7^"tD;J9KkWl߀]q  y[ (?UNO|9|6mKqDi7K*Qpʑ|z1m!"9"FƭFxI.@-QR% mF"-]Z0 r?Ju^w{&\T:FkZrNit&4_g10i&0'2 #ƘCndlH(疷)xʹj%)g@ěPH)#MuVAu(e-Ӛ `1$1;c:OZ#Q!R 0pIu0̃t.N/g|'Xt>73#;;,B,KbyEϫ |wщn꫹wx/߲~KJttO`W! 
5yACK6*nf@H0S] ҧOIp*Xjģe1jN 6}VXpg"|rN#HCOL 55 F& Hc&ˉ4[B gƕk.EFVI鶶6{C/kxzw5&YGZkȚvMjH\<Ω!swǩUr'wlD(YZcPro$Z3-vYGNI7ָ7Z"HAkd"Gazfl 8 TR$NdIr.<Ş8b>QTkTRaZķ-[dmj鶒hl-wg{k [st7!>n)}L\9yP^Ɣhuѷ_a؇VYJ%>B`*t?ݰgHjY! "Ev娦^k:k{)nxt9}3iL9 .MX`%8h3Q#5"ʭh FZHkxh k rO iݳ5[iQ/GS"\meP(Tz:JN|{54A7I„k@*TNN XEP?8"rkf7s1W7muw1g'ݑtVMP/Qb$z1u\j/ܢrZ98c=De-ߴ| [C2,3 u[S+cjI,kAY{d M//n9sg21erh _3cLqL1^jy-O+^U{5v*]֩TjӰ9;Ɇm2tYfp{R!I%V̂%笐_s˿T0hcr{}AJuO]\;%uNH+Xv,.,e:/r1ZRѫ1oYuX 2ṫªǮ\P]kNPiBYN4NZ)LP;XxVɅ.1oC ' ~|i y5#oFAm&|Wg3G=+9\Z~gu!BQr'@nEd6 $IB1HZ-y=N7v=m^~'"5\Hp E*V&H8"iN4hf]o578;ĤMؓ[n1 EkwWiLbd: CP5Q: $ 2׵?j"b˜2k"1RO93<(hhh?4<:٣a0?D;Lwi Ts0nKR-p}>̕f)j&pUNÚ[Ŗ8‰5Ma˳an| #!j"pjh[$x9T Idz%{fx=WY7l*Keéƅ2ۋqsrp#QKbc$K1"Ġ0 YpH#I icgN5fOWTj8 b*zMbec!I* @, JiXUt\ZqE1/]|Pi]yD4p2j\xjr7yE57JȌFEJsDDFJ(tP ybPDD [-z5N[ 6m]qW']dmˎS7QݫKIdŰ[f(|]epйF2Ds`Xaa{bP nFamؿ-(" rMݺ.|/laf᭚uZito~@ pHD2bE8Hc0@IC_" Bcc:X*iT*q h&  #XTKAF1,@C]Y<έˏl `_*e֕@mJB* iBMRԠHs=\% !L4זL~WiF CH,L+&e$IBM"4 P!L*H]u qɖe$M,DE˅(! E1.qY(B@ ^$\c3gdҝ0$t^`uד#[hcpLcA]],@?5gWȉx/0v H\B},3B{?n.TwAO=|fIBG(Qp'byndr?<7/U9 @we%*H 0ZTݘ|]} cL[*2v!a4;/{B /3_<+C>K\8ARaINgw䇀JCR#7jRdU͡}DIhh}iԺPQa1Yt[{[yn<N Gfg uL\YMONxn'ԲWÁ%8:hr&;$AP-h~Kgǘ mfX#jh5y/^VLѠ~paju֍5qjnL#^_٪9ȯij hyb3bFJTJ&~.*K⶧S ߿|wg'ǿxɳw(G?:~ haFbدuުߢ7M1'+4&G]N*&\ BV@^ /di+xyTE8gZ? :u=]=6?sֲ^ ԙv" ԭ N,dcX1cߺT>!=1ǣj[;ڱ~y4zj^C.U0H 9GHj֠c , #!|R25 Zpz"!\ۋgLA<0}O7)4jgKu{6dCyfYE6{V[)W_YKV8ln=?4Oէn3#R{>smVٶ@h>1IT#vDžh[z}m'|YJa^}}pE+En$J+s_ޮ1z_d?!}sNfNyvE|zMd4`k3`mq攑S+u hp`s]<0;"RLe QLhZ*% ^)xKCwNy獋P5PW6a? 
؁/Ӱpl".C{@eou R?毞8`[ۦDܫ8o`@>(ؗ8>EA6ͽ4Vclʗ =X}`>fif *~9/ ߒ@wݵIEKQ,pv58[;o gU-|\ρOw.]_1uZAVÇtQLraB/o" TDJ,'q1o72D$Uomyt6$mӐWJm WJ_RuW!N.ZS;់ɖs[GC7J,ܫGPڇn_^\=Lg/tTfIT.MW!dijMDžW^:PDW]|tmsQ$wYEIs?ԞQ'O*$>Ǜh5\vL6 ܤk̹e.$-h2rp9R6υQ0;$1Mށë%mZEke6#=87vvM5^o묐5뽗p1=DwI/L.UtЂ^CE8߳4 gI۶+ZQrt}rr\N +^rp<4'NUQ9C7AC(/^hG֢Wc{xtH~n=W{?anB}6C} Ͼ쭪·6z2w]Nu8׺=sZv*@"ᆴKq/Y_>EW:{3{/luҭV\.}Y;¹svü"Km598dKl6F/?$^ {w/,?wƅߥ2Lu=c?޼=-غw^?<9X'6qoTbngo) o^?=~u#!mk(„AklκVr'R0YM mXK^c Kދ[<֟\S^9;N]4<9gw "&!X( 7!1 !.SJĿ籠4Eo<\"k,u7|:P9]cu^Emtҡm QJ;tD/IbKApBF ,DI.Er Ȉ1HZjK력ì('UՔvӿr" 1 wΈI)' ϱlV'B "`p<*:`xLts2M36RFngqޤ 5e3vpoG}=|ʗY1;Ś])_b˗9()nʗ}&`-Z#wr"+Ǖm?-WsNW`Aw渲B5-?0AYX9B "'CKx{U/{~p^Bx'm/*=W`ZO{VB !Zf 7vᕕ79 ~]gƊ "5!ba4 9Gk*E SȦ ̟0/M〱 aG9>\&U9)}.;.ƃ`Tm7d-kFdTPss:?ha*#iZ@JN!Qq$H&#Eu,zx\~8:o9mޯ` v,9}CX% %ۇ2сKŮ(|MD4oRH8"-n\38:$ aB,)FHE`J!SQ X U Ԩt wx=5xXV°R]1~WԠ, ,AT, 4BBa*jnMbhLĘ&, h0I; @#asp,1BPm  f"JWʒ Β1Hm:n;#&uӥʘ i+W7q. 1ؚO+k_MwrP-S= {U+hN$ BX1mHKy'L"uGs$NJBKKS.ШGy@s( 2j c% 3V,0a{&%8"f)HK"(a RVwWzL̉隔͝; NRoZc#PFFsPSDDscWR)LdlDK6 |liB0,!BؒL1J#,f%]"DٷJkh? 5BZ/!y^=ˍ0(H$V0 jA0#!ZAR32 L`1mKy:&n{x:x ͑:&cIM;0GESUiDN(]y]Iؓ$>4.ckaؗJ7g%U]ʃx4e!?3dpf ~ĭ0`'"!/CswnӒ:#]7]%H*%G `ܔ+cp]:We鎲Bƫ']Zԩ-<37A_$[%?ZAQ-W =5D^pVvȼS$SYNbV*uc9eyp鲐,{4{Ȝ=c`ut'"1YۂȢZ׋=9vLP5#aR>y0r @0Edyk QnϟQ,loUŽn&_dL3m(G'Y5j\;ǣrq>{x1W8AԞ9sN!s:uuߧ }n'~U]Q_%te~[ V`Nuefڎ8^bh9lsΐQT8@VB_/i{\f'~ʡ+LnM+tw\wuX-!W^6CTN@kpCc,1ܵ\1HݜO%тߺJT  ':fކDX.@B6*(QxLxpk8g^6j%rRӰc )ƚŘQ0Z"& (, u`' Ičag-]WDJGĻ*ֆSaXbzy_-ʺ,e 9VJșI;,u1HP |]^X^> Sc{+3zBl Q! ¾CIb0'2 M+ ˜"L 0fStOk)kɹ&ф PlwŢČk-Zy+!H88Dؒ!F@*nY%bĒ0lf7y ;bH@C, I6ʻ0Yeg-甽vUI腑m,v(?gJѤwbWryՙ2QC<+تU+y_]eǤm=Wã)(@Ig B* wdvxvӲG'׿,nyW\joSV6eۼD&MdXֲOz'':vGW[QI?C5jukZoGݶߐAӚyX/{M$ltC|Eloֲ֥dk+!fi5y5m]cS)5]WS]+V$,IXm5h'L1XA1^Eث"J$MZ; MUK°/o:={E5ˎN r&fGw/7jnz n'. 
p!I6T~3- 2fڠmEUU ;g_>?6wdFGx9FFm>QVœSklz$ RQo~-)3^?I}t[C]j}?z =G.ߎw?um"/foVޛ&xf|~"2nw욘7On1뾏COفQ^YR_܃綯ፕ[?uƣ<*ٙ ASQajXNǹvNG{~}9oV~rmbECY]enƪ{l >Q/pbQ!̚sv.אCLeWCxd PpLXAA˴)&AJ>^{%%u!P@x#aqbdch-XDG2;ĔŐ-)i)a~w4qv{Qrq.U9s)u@]7tlv/:>U.ini 6b.Kgߘߵ-8 m~u߽]pA6 R;-C2̰un-ﭞ'w8\BCoL ۳e8;Ng)zNl5MA*Y̦gg65Sx(6+β fϯdJZ5Ĕ0 An(v_uDAŜp|fuPh#;7=|0o8hۃЅގ~?i|d Tƣ]4^jJzlUmmبټl'/iCGjSƙ`Aw2ĨҺS5'l:]?h_Z=O5̞:9ۚv,qp-ghVu-r)lF]Ε^驐 `= g͈Nc#t !s.a$*݄{6=f /n|`V-FL\Lpo35[T 1}&RB*4ߎΦEr}X\N\ 2Y^۾z톢"1bn>)/R{JјWʅ@(dkaeo W"-˛l]{ ƿ/nkDJI8"p€HF ~RÛVX%jAienU-J)!VTp,nSSHbY|RlNŶԲK#hWrh ݉NAټ~U[YwխH``2vu9ݍD_ k[c*pWUpC%Tۗm2BDpkvP4|2 !HʐxɊVDhw.KjYn 7.+[ȁԚ1km?{:V '}Z y3$O}3VdcMز3iM2BqJ KF~X~D*U]IeM[n*:+)c>ȝPڍupKNנьH ]KagLhT0;œUc&AUCD*$ !$f "MpB1& ő1sNŒ2F|ZU|EFZN\>[?'f}h1taS,E)wr*x:N OWl2Ų6:X;p)W( Ōn"JK(j1"2(kci:I"9] s۶+N߈6Kqm4qw$%A=)[r `waؕ4U+[r|&9e&%Z-ovHlYHs]0讘'yF.FJn*Jyri6+Z99ktQ +0hAˋ) ˧K+ɜ"`V|9$@Y*xb*щʘvp*GRSjٰٷYȒf2ttZh P6i1xV35^bs I67;+S+G Ӵn^|jtɹ6ҌfFDE҂&)0+蠅r-pPq޴FtX6I3X ˪XwʅJ(TkU?<>=,I2 oIl߁\n>v_`~-WE&_\Ow!f#P-wc rxW8 pmLb@ݒlQl)1jN=!q}˽ǵKs/?YJelWRӑ fjdˊZR툆fQK3UYdx8.@bdx,i ٫NÆ,SǿUY j( J=PJRULt;"S]L?2'AIU |?kZRvWZe|LJKy>6Oyu~G.=a^G ZTJKK7k[`fs@ϵ# @)Fjeo'f8QpoB;iQV\6?oI'Mpos&$UFv5yr?}I*ow.I@ewIradow$81>Hn p#Vf7 5^euA/aS9z^tKezi@ 63,֨J T_ޝx,9iIElpAVw%2lRં#X2P3.ss3o {./XrŠj:i8WcѨF)uv2П*/L"Tp C\hR+2 &BwА Vp.0*D6ر|Six#]'7:2AN4$Gh-#fX[ThZ>dQT<Vhg3^$ƀ |&7y4iR.>t3(5:t/.<K~oB !TO"O(tl dE/[yǓ?&p x7N72S A頕_7Ɇ9VpJ7J!r.P$Z{XvI =cm‹g ޠmgAx-c׿F|j3C'u,4Փ Ѱ d̵خcٶ)Q$]=I\lT,S5ݖv/fNG\*Ͷ›AOQQ7elI~LwUo.dGruYQÀSNcqcs߰rnO~XXP&(x`fl;Rv s]϶E-) b iˮ\3W4Œ$Z|)bRJrZIZ%imRvZ6 s$L VN,|Kj)6 tv; q_/>\`Z|%ð<%O΁$a<ˣ PQ WsM .غ mߦ80V^He]AC߻+"DžYYk$q]|uH|98cr.*dazz=22JPRA"L4vPD-Eavh09hR9s80P%X8$8EyZ{ya>0|A#d8d=SMa\߈|:&7STT$4 &vb;'Ipœ_ާN6dpTI^Y0B>%DЌW6[,L"(Jqr >_J 2;,織ZPA,x+d#MƗ,}{⸱~']Z"_F<D?d"*k̍ViԄLRI0!QȞBqƐ"Gc9]$ƣ{1^nwW"F&  ./j\}Cd㓷͏GϮ6N/l4=~߼|{>,V%߉{0[fYFN&\V'3㲿|u_GVKwX7F"N4Fq|ΝoHE7EN4/x鳋F5>Naꙃ\;>ske%ݨc2 u`d?"#V^(i[Ʃ zbH*G/=;A*,oz>qCՙ4/Aps8ΣE_"U~Y{:,}u7`_Q;azv xh qɛf]Ԋ?| ㇕ڸMwNUDɴdK`GE׋JttR{ /G-=Z>S6R/x-˺q .rr\L<136ڎTv׉V@\Q*1o"fO5_6 
[binary data: gzip-compressed log contents — not recoverable as text]
̌hLFgcږ}1BRg38?)O/inwrx@1|}hr=ƪ#|η>~F9~ux!U/NeĻ~M-y}mxr4+9M\^]'S0GS;qF}QaPmXyoZ"Y/F33/`w:֩31.\KFeJb;ӫr7w;o#;p?s+Tܭl.3)7bS"-r5h)(Vbj(wkX7`-FzŠ1HһyS#rz}|M/ڭ=<hS^h kTr,Tr%>, g̳ȠX]"SH^K5ȍ-~s:2 J€WQNDf69:`2&#aۆ$SVH¤ A^E'F+DN@ p9$}&ѽfY#>Xh/[?cC>.}K.ɳN\OxF$8D^(+3t5HZǹ@A)9%c^=^/dy ;Cc}z+ l &*:;ƅeZ U0`sq7:6u'qL7~KvGMjBS Nul SBB pj7%VJwRVbJ6Kph2Y(THYkJJeT(^K!c50N]O#Xj1w?NAɒu`K" ig@Yk1>Il UC):0rf+'j]Tuœ3IǏ~໾ } ^`L{ ;żO/bsN6SfGUeyHlQ D9SƘRƐTE|mye,)M'垜=lg?-F˩a#wQs4mښ@DAWRM$l12rqk2m4,2+4I3%KұRd h~CWBdW}Al]j8蔾%~:NjίI39ٰw{oثu)D->ռc,}EZK>J;_+?/Vy >y-%%019+4 : ^ؓW.^BkZF\Rk:!v}..RRLt:JiM,lZ[fy2F)/b IƦPF p7̀W5$SazOT.z'zBON>?g؞S$+:fD2.뚇@Wxd]4v_t1< YaU5yt&9o E@&#z_pl}ͤcS[m9XlO]\H Wbeb1FɅ,Lj3X#&ҰPQ2HՒfyL,j16͆=y`5{Ӿ|ETE,ʺv\5ah[op,^DR&٩exd-uH!7eDRgoTv&`ȑ{QS}JlGIw>%=)x4M,+#z%iDŗ1ъΣՀ7~˄x#㶹Yz-7+Y~6cYN,v>iv|]F ,(")$:,KxW/f\M IH 5CV"ZE09|'pz?sc5\񧳳IT-|bOg}0=۷y}N|M<|/#L6By#]{-Kϓ nF}OTPP|/Rx*Ab_AE`x@GJl6'{QpJ,pulvenl?.~w'>[ď/-ewV t2a5y~w_^W 2:ͯyZ*W,BN=$t"2o.䬴qE <)؍I^F-nYlj mgsT(h|ךnjSA@h^xkdVpfفVdQIx֍XIVV@RqTn8?Yjʊ: ,$ %_"6Ge*يB>=%A0i_ڂbKF"ȵ4X(4)BuȤlR}%[ gͺkѤ#9ƎNv\5?xmGXʲ\/xvr[F T1A1dxړ" M'r,ƺ b;>F cBH%{1h¹Eʢm! Ì l8OU wձe%v8 %z-D2VST"*RerAXsj]AiIi-c-eZ$ s^,j[)|@IS KK@J68 j.xۜ dx_?=o"<3l22 :uDYʹu@nΡʎ!yXF:{ʎ$~^^+Z7m.ؘ0:D.[Ö䕋K :/$ 2.#QF-(jݮ-u'5J N QjGՐQpP!_aIH("gku !v7ty~F-XX\1^}PCGZg"k9՚΃Lڗ~+ʩ&(>K17$I9M;R[]x`7Ã4 x)=kEVrO=t/T}&0\Pa6gVֵğP0NJ($epQa+Jf999UbTr /"Cξ(EބH6O2Z#62L."(1 M"UMvCrc2 w28d)*b~| dM{e{V݆o" wЮb/9JW'yb6VS-BĖd2"zRUyDm=LFN 5̏1t O hJNwԸτ9@ %B#ﴖdBprFNB7;6 NŒU0$Qbu Gnj~'1*E4nf秳)hL՛t=:!o7oRl5B%=ĵ⏜_[C錕.+^+>SoԴ6sOGkxTL?Wfo|ξ +i|]koG+^x~? 
v6l۸0~J(!)YS"%+3dOcOuWnb2TvaZqf 2rmMR!jY,A}K#F1^n:УyxCAkvֺdgzH?,P^~V n3OvWX)t}PsDX7: o_m}ϯ޾ן Vᬶ ⛛75`6gU׿2xJG{;'q͍tb; 2l+0{ajR{:MSƼL9P5e{C GE '̵AI |Q6:YuFa|=8\sn0t YНiНK.`F¸;gȑDrÜQ [XcI"!tsaMҮ,߷23SyU0< r j),vʏȼZ9,`]1aȕL:4>S-qoKwե4{j̪bP̱22*Cz/'$Rz\TkniδYT\V?sW_7 ȤKoCOko}ʥ% >g0ȥgWW?y7{^MjҌvK7JQg;<l~-{X.6HHHKiKiB䌂.TIs -,%)>.[bdC sbrv(aS:oE418C0\XH>9&,Up8%b"pc1g;Jaظܗ 5g&a.o|MMXDH}qokJ>3+$!f _=ڈ\oZpO!jNJ*k-0-=(o@_b`ۈ͢z{;v7e0T{_,倩1ukh .7kmǿjr* Ƙcx8 ~SE$"_hYG4as{&+飞.r4Ͷm־}_Olf)m Ց A*0~7LTZxU1^wz▱ŹYS"` ., '1beP zV'֓?ӥLLryå T3@cĜE%i s<=5e{@gwͲuNݸfߋOUövXw$G%V,;wBv3 :B2"$'g 8UNcAt} [a`7՟9caa߁>18EDd^#bQ!"y[ Fbnj*XcD2x3n/VE\~Gc7NrNNmFNy3rJ^ӊ_.k>Nj6a&8 TFDG`Q$VJ,D5Qj@i, E8EHsƍah  rlQ%6fJ8&J"zPs.rڄe CYo 3d]ތɻA\VWPb{ƼdUXd O*0/ nư  Ki@ᔞ H 8,0!Ó ߘZXfy4q i.o%CO [BEP0d `LcyX׍yrؘ{}d>*D!hyaN'*jGf5W`K`ƇFi[$Oo~.C mkԃ@QnϮc7p{FfK9xR}CYq< EjqLaĉBd!0֥, 6\A3Ycna%("\y 9GB)D&bR1#H`+3`K!r߭8_;k{Wn톾au0|1ni{BHvDv4nޕ K2LIJ2|0nHϤG(ƟZ %~j >{[oB:꒠Hۥk_}9)޹Њ7pN"ʲ*P-[7 I:z>1onݵM3ʷX5 WӭGR[=`Vē]3xx5a\,L/eYs VI׺b&Ix]O RZN/N6S^"iަpIK$kaFgcC81>Kz6 `2"P\)ޖ/&0δ0MvaXZ5/ՙb6O[ jM>Q7$ezՐIH':s-{ˀQ4[ Qm%>k缁/k=b)@e/G{DOW !Wŷ"Wjԣ%-qɟ[+vFUVqݪV KI䷖-ihj5^=hmg6E ~=kr:ls@b*]ף o)EwofuX%4wP .NjY?aw}[woF/Gџ&|Kd`ge7uO}=HMxrᝤ{Xe[o㻟(h嫘.zh ôȇp5>}04Wݬ mF/KsY m~˽W$j[_`ieY`l|1=}hVK̈́Wn"F5Lсtdվomtfjt:6rUܤ|qX3@;Q4@9en[ILh2M*b˴5w;@NEOlkM[Џ ,paZfк `&9ALWWRXdǫ5$~O^"Ȃ &"-W`9"pOޞ; ƒSAx{Ԫx꽛@B9N8,<s)@r&3041Z ;$-:LҢdSOgxUM 5ĤuX&1!I L{R+H|>íE 1Q, xR^`4fpE?Yk:X5Pwt钠Uy[_QjWp~>AQGၱXhT V0b4Tǥa5GLIE\$Zn2oGGo4JmY!gmLpy3.9a| =ZmnJ⒭$Khؠ#Dߔ ?'OVe-o!LhFEWߒAT8Z UpB!R 'oPIINUNEKimv#Xy8]BrE٨Aʱx\2(‰Y2LwZ D佖)c`D(o&OyqAr!Q78 #0/uICpB!K|T`Qxg(ʐia!,$Ȱ[)r\ٮlSŰϚ]S6OP[ {FޕMIuymQsr\\^ȭKI卉tjS9jwHK= WGe`fuD}b'c$FÚF22 %Z…R>wem$I2)y50_{0/cyDJ(J!JeuR$rjUʊ"2 !`I6OZ0R XF ֖p[( d<xo Μ༸5emӃݖgce''i"]=X?/43y4PPdPdYعMd Y3F225OrȁKLdIuܙ~RuҬQP'=k,KCJI[ȷ(A@_Je2'5]!#x|NA""}T;Լͼb}Rb;R/j F(:Fm c6$:;!QpRk*yq.gu)ꚶYjsmMU&ELw=*׺Ǔ)/}HC R4o43XX(,f(Ew:4~)G睚#NQxr$t#$"-h̆G+85BɥYvZ Iж`@J'it;W[h鯴rUZflBG@Vȵ` =/{%~3%~} >zy !\Ǫ>{.وWīEfx@‚J Uc{ #bg]ereuԖтtd C%HYk 5EĤZ6R\q) 
bpElMQjoiIzQu]w{M01ZR:p1]oOwYKĚxLڷqkytpOⒻl5 1ٌB` SؠHj!RĢr֤ s ۶h)/:}?YUNɉjUX'mGj(dȇtV)QY0ܷc'PȰϣ^Y)NO`@\UcF`v,dEQB G#KK<{ ?J'BAM3]$TV2O%іњ)eK2Qk1bfO@Hz.gaMK>+An<- NYXpmL>@HIH@ s0<R\u>x87&"DQ:Lwٹͱ8XA [O+8pE{J$8SbA8(%-(6nۦWrӸzyv9X_N)!R'Dcɢҵ!≻ȋH IO֔ߘ7 κf7O8Wk& {%#ps I¾: ͜)3g>]~i.&3Ícef͜Zy)D0lCapyDܹv* pÝdGߓu)bSz—^tLիjj{ H>|瞽Q HGrbƨ\*ɨNB rPDT2RBW[#p-o s Z3ϭ|)oV~~c:2j("sCAd`Fpi7: fgb%Zg9&;cZ(973K6qG ;rKoǫ K]ʤ 5$ u(܄QlqIO~4 @7䫗FyEF@Ab-&=vw2gxQ%'8k]rgOkȿOy9[ѬDo8؛3/z33aaD Fc}"o&S ,(/3#ޫ9f r^ `c|EE'gG+g̛6sT Qܦ/K5y{A?X}e: L)0˝`%gޅj6蘤}C}5PD7B&:p Y"!dVKB`mB!!SRFWPDCv 9P1(d9{[*YWhl g|ٞ+h%2Qp;6WښP֣/ۭfEev˥gnצa6NUMPK׫^ R1zNg3:{z64ɖ.>g L6/iz/^vҙzG7<<.2=.,thv-< v{j98K9/ډt KӰ=7?o. W!]>ٮ݉',ܕM*Ix( $C!JZXf{yC6P p@Q#T k=ԉᄋE-) ~a]5Ca|,]LY/~d%y"*e֥$rI.B+3 SL2iβ -CS N^] ޠxD@)KDA!ZU.dG T%X gY@^qn,~&qO1u%gy R:%a8A$اe P X_8SJEdUJNOmTq< -f9tEy}:zR!(Hbv{""y~Kl*'Hřu6 Ӿy#^"*fvLou]wt^]2?Ҋ#-: &ݧKW?}7_?װ ձgC-jlkӇ-|t Zo%ni}T[b#!-J1J eFB95#iJ!vәp/^spV QZG=QG!ib. WS4CԶ}`΁;":0Jt| XYik}%U9PűGGBGZ8@j ͨHjy%$'/!􌙯$$qƠ޵q,2S0ž_$5M8>l /ղbd8m!E](QtˢvK4kfJ@$%56j)iK ujw >0h S10s&rIhh&PH,Ks}nFf'w-zLQ<܆TWOG/xo^a8 9h͕[7^xd6+l<_Xh]R$"DjL2hAsg$*A쮻0z|eB5>2r|V aV'3,*"\1L.'\BER ;4Z5Qs43H#K8t˒VxGuaf]Yt ^8x9wmo4nԽ'kUmBk'"v@Z.ucvş.Lئ?R yKɫQ.YQ:ZCnaZi"mKhqNNڿz'mЌ||h__8WiWB7|}Fgah7?猪!j8 jxq鯳|&08wlѯ?>.mg'ۓj}1MWNޏ/iuOj9?O_h BgGt~JFyҼ|{2}~_;NGYٗĘ=t'MڣK?Xk.?o7gr ~ܥ=M>Z k΋Vbr1ߧVMpm-bjqp>L{*K6wt|vh|Ks@?E[-mx@[5xۢaj0<'ngymjҷyב;߲r{NZ8 Q"]'t g0Z8ʢ1jIFB΁(B8UEVY=+'5__9iua<|;v;' ScmGLDIB͘F'ա^y~ ӍK^uzm^=v v-=V}푮J^:Bx$ OJht`~|g](W"x3 n 8' | A4UƠIND )t̬04˫cD@)w^Т.MH F:)P 6PV%G(1 d⩰[LmvkuO nttrc1}̥ v, qYiU;'ܺSW 1DWip_%wFJ L q ThF $! 
g#f<<}>{x;JDQF!vj9TIIT*D$Kщ2cIKK54JZ0ES]σ4ZFE굓~a;R:Phu@3 lN2,%1`tR9] ՂF1iA3 vam7X<.uנ둂.j.Hbef URA>>tb#>  zJ^ vڟd)>C t2X aF:( Qo| sS*K#ɥs"6 HJA5&^L+Mx*]SL=ED% VrC[A¹'kr@r}OT닌e磺bra)+?ŅFVEGv0ش R $@_3c*R^E|]}]V<3y(!xAyo@Y"1Zk#u;˝^iԽ[nZUߏk0i|<~Wml0.Ebe&Ʉ (ChˈU*4dd8"o 4h7qɍ&&&s.Hg ovALe-츹B›+v,CtvuJ]vby۟7ynnGT RڵE-Xy1JG7>(nr4'8rH(\ĢȨN1&IwS!,ZhRNCTTqG.!HFb܍{)4cW,ePNly.dyv_0pdf)ī^A@`AE,Ib"0IiJ-h)2bxr"#{YMtP+5 cU)JM镑R܍~2k徠v1YaԦ{k(AQ 1*oRa:}D$ezTڅUL`!/=m((fP\4K$#D栝+Ĺ~Iqo KǾ "X6PUs@Dtpe6eDk@Su9.1Q5 #Iiʣ,e#@( #i&4wݾA[[JW%>:iɮXHfW؊ߌGNu,!yl 14,%**>I*.>.XJ;w$zwO͑wfefZHt[TrG*VU ;6Zب" u@e2. ~ऴAഓ#!D|H2ۏ#h?܋8  &&"c92mbjձ't Q˛9Joݧ8:}{2{s$`TSF~+Ս|):Nq8IF X 5ggCp% F> !E&OeYҽoeZPZ3o`5ozc;Gصd쪶]!7v%]SWPZ_Ëd|goZd_ǃ?m: 2@⑟0#p6Z#V14@Or)OIjCJ`'@yM:kNi୍ow ZCz_ c!"P殣2xftЬ:O>=~vUjE |usBQaӢ7{M|ű=X(;cf7ڐ)d&rkׂ%Ȣw0uRK>)Q>p0Fbf Lpo,)89ϻwIq-le`BӽqEWqZ3KrQ$vf6/߽R+<;f$Smx-Ftg8鬆&K#P -Gӆz+F9'*8}^1+ĩ|MUseƓfn>V) b`ZZ6L$H 2X)h@n;p/=v { ,:Y5V3G^g E!$DQ2}V ߃Oёﵪfm 1ہA{4/B9赁@cyYmՐ.E1+0RpQ@jEB%{ iz N|d"vsQ r˲sעShuUA3+2 x>pQQ\PW8'r&=>f fH|Of'ech.G96.\O5,t9;_OQuxW FQƱMlsާũ`02QB9aSrx甇C׶pGAFv[4܅B1SH6pV379 u" 鼘qw[8o)}W?^DGGtрW ͋Ba+?{Ƒ GK>8 dy5JvVTL5gM7דepv1(\n?ӱf1K$j"b@H]MJbsMWuհj$!,3{RGQ+<Uwsw:YWk]_MVZy?d$W,}c_wO gkz$DmJ۟o~|߿߿9>|3oޞ :D Im7/j_Ѐ;UOZ5U57kءj͚u5+rMkK4h~O׷o3#,VMteu0b~UZW$V}wU-_Rhu~e\}|lclVă$i2^ʐau Nc&H)hg>x5Z^tD WSAZf ^_wY_h!uʍZο7Rg-phFdK=]b|8G1 %0n*YGn/m4 K~ Fiq*`<JxHFXp\ !'gס >ެI9;9@!K-ք9 c0'MJ>1Nܞ"gIq $(H9wX# cC*r(gf6X3L7Ѩu_= 3JGЯ#ur)\ІH/  F8BdNK^6&jFD[3)u w5 tn.b3ry!rYԂeOepӯ4mk tvpC8{7j8p|/z_K;yWuwma@aoHkeRN7>->1zqpBRFl} Z-wuCpѱWL OyEJIdn( H`]ԉcsV5v-uQw#. 
_=]!.*EϽ15(ǚkqȖ8PNlpL:62,KmTfBo/Av'MyP{E{bl}s g̮v=p\⨀^_\4qCM Ȃi :Vāi`T~Og,Wo&Z$ͩ G[;'NS0A AF= %V)oqfpa=76p*)cP I4m/}b1%i`XsFZ<|mv#g'Lx_<;oc)Al,E>>$Q}mX o E4-0RmSnp@Z_J;#=|!ȣHPt:B8qLJfYkƜR[kfsp1J[0P )K$)P5r雘5gGʻ'C}>\)M<^nߖWMw@>ff+kӦ4/ifDX-VI7}ȭMe,Cvҙ`3>eWѱtŴv8=DeɻuG$% ,YTܺ;m{ijJn`ϋ>~F\*6cz-9kκ?+mc/V+ݵ8X{^i+y9%^D zYcA1{~hCN(C aEknA Fl*7"Q}o_ܜ&n>;(ܧQSՈ.;[`Z>zr1-S=|ɉ|:4+ϙӐv,L pf2ziI#K`Q 䴳FFC4R9XV[\<{R *ybƚsoY/@ꢛ=[j-mfbjJey h*ՓYk+r Ҡj&*d(Z ,9mc?9߷C嫈.4>&!JD "P-{6Z7A{eSrB;JځKn?hk ֱGh,u:Pe&*! 9MZ/iȾh1ZlD*Vc<3a=\ПqmvWn]IJWBIj"f"X " -ZDi{׆Q8p 5Z1*2Tp'.XM;)ocm [7ge\ɾL)m'"T%^g{ntG.-:::<;BHࡘ&@ JuEbI(Px'i ^H`)/~(/"o$:҆lCwsXwԄ3Y@J RzqJscϻB Ƀ0y `*"hai)<1!`s*S`wIoKg(A h"SXC#w&@()Q P4Qc)B ~^ݪ_V}>u;{j+UövX[w ơSƤR*]gyW(/}Gt1pL~eg5{)L a.2siqTDsn]b6 \\DE\~-|n{ iYhtx-X.0>]7}.-Ow28DFDi# D䔢ʄ踤JzchJʐBHyy=oZtxR E3M uD4cLhO"Oو_o* 5~sZ&O1"ymy&WՕu?^}rdInlOiT/LDf X y$KA 7DI.@V4xEhUOi5KM.jU:;ۈ<SFzxdÔlX7Za{VuA?wgV@5WOWS[=ȳ]ȃx,y yaB.M|V!B T1ToyJ7kRwhލIsy1wƒMZ ³vo56Bj q0Hg/I*}׫L]:dי-NwR:Ď+r_-yՖ|E_) ^v VŲc{^}<_i&\*V㐜x$ϩ/KxW[ \R[)"LZg ˚ eˮzZ ?GiNeo3'?~Q [ElהSQyBIYN>g']O^W|OB;{}wb-^ G٣0x_ULNA8n?GңY*Ypv\s1?}íuiPw[P\#ܦ#^K.6$[-p=t[Uܤ*,`qܡsbX&&żlRe=y|3fvs#n_ @g;oEzYRF 8DĦ:&䃳6:4oY`e=)Is'mȧ@w{+4u&֖S"X"R $pebe\{(Ir ,6 ,9eY-r˒g:A[MUWEW2 a5Ӓ3[ZY|^}k=-&h[ą'VLikRxHhF0Fuz?ok}~nAЯ$ BI{XhR4ĠaN<>5 N'^}iQ›M鐭 Ns&r:<!! 
V!^ĭVy8򐸊.rNg$xBO] Ykäen֝]uzVƳjv>36KG}yޥm>q­ [< E+J Ľ*GUVrⲬuLKR۫/_E^-3Yj1aKX0 ',L rcRFih7B=5#K M`]-=)B9CZH:ZFugGjť}Szҏm)4 D$MԠtsU/^˽ m:*w'k-"R2I9zxrisȠ # u0D6IR;WMIkg X]o cӌл_ZWLĘ'xY e\&w \ '%`E4כ'|%#4{p-)g\2qv΍x' ۳ɔ3jl]$Y9}d Id`Ef,n3ؙ&?γ!48\X_Gd?7xۻ,~k5 ۷w^G T*,ŝJQfϸ5 6oh-kȦ޽1Kfŋ{T=~bUp3\a/fs+ܮ8HHԆ?L$.p9~5dtcK6tԌhlF3̪%n8רxch8]LUep5N.jX]YN':R0Ru H^g7ֱR-jfǽؿ q˯o?_|~|>㷴f`$i~vh]7re,>γ^iUlzwr84lpYe/z)_+{kW6{Xf!#!-ecCZ^ʛV=0Hvek1zY)|9~CR>жtsLkLEw@*VdX")Yd1ΓA H͞RZL`_.wziD\^0<=G,5V $%mG< F+͛`;jiܡbf@h#mg2r2* Q:s^;}B9I-6,HB$Raa쿵Qoa&Zb2 5\e~4VDX%mo#i;9Hsrl#΂jX tb w/Ǫ\)ruŊA*vlv?Q=H!L !KgdBTJF .PU`-GNL1~'J;qy'lEvCf>7ӋYVT\0!&Jal{Y,6̔l+:8eNIbڎ\ц4j+V(a*!hG)(!ZŻ|B!i۱UYYu!)$sFO`ѓMDM!:#<X_\·-cm::,\N¡.%2,V[Pd`ܵ Nx)`+ xR[[ן3DlPwgt;|a:/RϪ r張SU@뫠AUh$N:#S%deX#Ơt $$9~w=vٝ~;}4B)eU[Cjoy t{ G}>]#E=9vHHU>codx6HڅI_Lp@eoa |G7(uӧ+oJ;Hu$HR`j>gc]:LE%y(X>O;Cz 8ˣ(UL^-ӄ(Kt:g d1Ȝ:NZ fVt;R[g?\C7SAzi[Ey;;3`j.w}BD&H[ Jh A8E8P'`pl H|Usp@Heu(, \!*! iiw̙YY ݿ;ܿ(~ a{ޗyv4z}ٯC,kt?rIGXoG!kmX]Q`,or7Fr#O[$*$X7!E[6ӆ-jf8SSuTOuȤbF` 7. 57::K1eM`A˽ X"I BY %/F$+̰BeT*8d,m/}YH#xxM}-VL^ LuUK KT+8 >[ x*;k-aLE^X}3X?^6-hA*${֑q,rB6K-@pлD_#%+\Z4 JwLj?]vȰZpDMVaIWײVg$W!f!ٵ*z$nrg/Xͭ"#I'iU'5=ռI?/O軧~p/@FŸz_ϾZ]#xnTU4鱗"ѮN W_L'I>W{~g_͎Nբ^cy\h2}7o꼷}s=H-O3C%y;|~fˍ+F597?>OC~ʧ'Uٗ_I |z|r5Y~8ɳkR7!XkMzď[|ayz}O'kX`05޾LN ;Û;$0tq7I)٨R%.=w2,ͣ|9sۤG`^7?ߗG/[5yk/o]-=glخ'o:tMN pg*P=Է[m|nu23jҷXܲ!;m'q֦-MB #6/ڽBds(V$eIFFL)xCi]7D^wL,kX~K2 MAɪ*1bL,Ir,+™x&W50W㽪L7v 襖uW۩aC^Xڹ('auض5_݁=l>^r"6X]8$qv@Z)xX}OK͞f[ͺ,${ >A̜ u2h'aQ2E.oJq]%nm3"ƒW.$1Df','-KcLm;\ާn7>8?=_Rdm1}Sr=?6%7![V}7iN`EÍ~|5$p79 ViljB5iFJNdgzOT7UenߨCUw=U5S.x ".( .zIFdAr!Sf"TJ%R).T>JsbEP{AB\*HE wc!F&Kfl@5 ,T2;RJfy3LP+ry|('8j?%zzqݏ?>-X?鲊IgV7`Y쌶c%r׌eq[cC'uIe֨QI{:i?X'13:`m,fMlt͸(Rۚ&gVEXf嵂d .C>lj2L}眸/; flYN#Jjč㫳7̾dcV/(#̜*d(AF񜣶,T42i$mhls:p (.fUgO){`$FHET 0%fCͯ]nPM1q_KN3=1qXĝ_ԀSN/ZCIB`ݽ:9)79%͖="ϥdSTޢĊL*lf^Z1)d^xuzUV:;UlRE\DHQ-X`eTr_tZF_+*65/zm;.;n9Į-cWej)UYBCiO w} (ƒ'pa D=ɃxQ0|:^~{fE͐}Vd:&^+hIL3 UNjg&*G=8Zq6NecCd cȾCG{B^]L*cA^ e+[b vR:=_l-N~yM./|P˽"3{ h/][Q]X?±]$67R~hm]N.W-߀QOf?o4{4F\Iy\; ˀvӚH=vCRMM̬cd w 
.{AʖS&xHK2RUs]֚z>6=w~zw :Gz!#\}/>?jsY䂩빏R2Ԉ2>$b\ĩڋ=z#<;n+!q*-aIE[ZY3W6ҥR)MQsf(2DJ0) 99Xq6 ՒPwI9F(JI[N"֕g$aFs(39?In'|5 ]6T1aWL&LxC(xT@7`!f%hܳV׵;4@l>@ #1G;[fXJNK] ^iOcĘHg9(Va9"s1Jqp%͊tr"luؾٖxW]}1š;,T7JupNcKR"C:JZ t@5!ԂZ{ H er  H%5Mb-t!-dcH33 i5/H\ b1Ӡ td9ExGm#/&exxv@L\0o/јj䐰Vvl),@x% (xԂsH]iUX0QrR`yUNu(q1 H y<qL *L:Ð0Ď6Eʷ ˔Sj-H&, Ptִ`[ժXTLfrb81@6(ˉ3\D-$Z YAQH>3yLSS 5I&Pbc*g;UX.-m. v98l|@T q1$E f0a¶(|Ynj`&.\&R&LGȆU8l j2a+֢ƛ ~\"[gL,MȀg:c y_-]b_|UQd{!c@Jʐ9rGy}ΰ`Z2$?<䄨G[-ky2AT瑵Ix `ﶋ;0J` sJ'.  -*=``j>pLqΠ&j 5:/Q%'@2QӲB Q`rQR0*g4lh륣 EduK4К5'wK{C:b8K4YF%Gj3Ko^5 RYU[fy/zx5޵Zz7(-Ia. f\M:[P61sɺ6xҘ[wv)NqW/v4lW+f'SoQm: ;bѢaT( ȈOZл5uClep?{ ڰ(RVx'#80`ci ڍ YmHmΤض!o6jt$ƀ?Kvu~ Ek/t|M5ʾawU|d1»e`93#0ҲB3^\DbiDF9|'Kx>(B@yV;͠>rI"WŪvD|٥P#HhRFCd؝@ES@$Ti@ui,?uj+z3!z crO\B0Z8ηDyIlkҕڊ#\._~}xW}[:BѡbePdD}7xP!P;PهSyNJ lQ\+:@Ƃ@_X R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)\%aD)`(3Q5k}Jf#%k@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ/W $0sR`瞏+`@SI DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@r@(^?%Bj7O^ RW RYBGJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%З_aūWNRv~w*%Giz1jXN\'>zh&9Nu6m>3&N3YhR~վW]T788Wvy*n6͡I})p7e޽Hwo܌vxݮTpZ10I-񝿏foggiKhYcv&寿7UQD;osŷiX#{ ROB0]t:yAnnz)}L tkˁ@Pz&֏S1n]63̸NyihgӽR*nް[㹭WעڲpU;uCP_[=Y7uf_/ڃv W[s%^[_L/7k\נ󰡅t& uxĵ 2Ɏ`UƤ7O;U?OcY&''wTYoO_ҰMsƿL_6wz'Dz B-GR|E۲y\b-͒qPR0 &9[契X? 
;]vrEg}W˓9 orKͅyU$:WRIik2nM "ǀ7cO¸cPp7eRrkV6m)4iLv مQcKk,㹒?7FHgMZFLTUd]R&Yb+":hY_p[u@6z^OǓa4ŕTS`w* LbYu Š IT˲lmi~[(ꏄu<ώ;n~''e4k_N~X'ŋI^DX߽)7Tj9/$ ~z,G0﫫Ց7z~b:$SF~qIMeshLp%gM[e:Y9'fs$Ӄr0.T>+&61z+2N]c^7/*yߩzt9 e÷,d9JٯW?-gGh*'toᒫ_5wҜo%@s}5tOo_o3oS8}1zƜI=9^j&?^]^zpOkخ+YJJ_];/cymgy@e2sFbɇ@6{nΰ7'nrUwݫoq}YIe#FM jC~G.='5_'OGi|v w/|o^_ͷ/_s^_~ 0Pv`m2{p;\ui\oui-.Đw\ y;IV" 8kk@RxIxËIY.,j 6Us~Faz_\x6JSV}w*)6NsB0];s[=m#]FwIy?_:O蔌3up5GۦTE_gU\IVz>>Ŧrx:fY/v߅9N[6>;'͜u\T(mTx\AYN>bDj\3y43ȼά/2|Rݲpy¼y};," 8hkzGG[M*J\ƁѸV}x\AObu/qLEkb;媁]Z6 4N1R]3*GcQ8Z#"ϴXē]AfjQ&RM*\֬qKf PO[ijΦ4St:Ղ܍WAت2'mѥrtLmo o{DN4@e̜-+x2&Ѐ3XT:4ops!PPl6 r^}uZyA{}9~")OۼdY\dUhm;g5h|)O7<_ =CGjqKlmJzmԲr۴nTemj9#[ywVq$ܶh f?Ť/p8xwCjjw9ZѸ77>Cl'v-(+ 5p Dv=8g/]ϠVvd*KX}6~W~>|Cj[Co끵J:/kүK%O:| ՝|<ի~Q*eHBe/zN\2%CmoQ2} .ggO܃p#YןTff{ y$ #qSkᵟv?;@hJER+bYK Zm"۲s3^AN<>M53gK&8е{bݴA%o:T`!UMbQyb5ρ,WyfjJpcaź\FF@QU6ŻRTf΅Vl{U/U xch0snFf'>Wx7g݀ZOkufg7Gq>ni `/} h?!V,9y~2AYx`wvwtLgF0aVppbm.f e٪,Lt88,u{AK#IbvWwW0ڃcyDiQ$-Rni ,")hvR@KTUV2*/"޻< ќ\7etI-DCb eHч8rYJd.s>w*hDjO2s=#5mO9[& {m4Y-t,x2uMvo\BgaevOUGi'=t~`!w6Kl&/DhwץK;yCb NѩuC/YNtl'x?~mԲ|;;畖h9|Ș aq>]fxa>q;bCsq8mfS*f"y9PgQȇajnnDF-fi̬֕}{ݬ?-fU:޴iq|@\kU%E"7 D݄laR:љ>:Ǚ>h L !KgBTJF . 9:@"M<85Bi^VW">oBC@)MyiefL̤pEFD-SId:*8yQTֻEscሖ&UW?Y`GޖdyίNޥx?S4zE?ӦUɓ ZCJwJ姸 ,Qkiy1\Y%d% .彴 $C=2)S %l)BV'7 'rGuaoJʒVmΧ7[k7m~n~^G[HvC㯃u]*yOkĕv-4b1aZ #364ʆZR\m Dz` h}C$LUh$L]3'.;5Kzp, %wc?'9,rƺӤu`Y\gػ8vؑ3ء Ҡ B3j$" M(,5(hໄ$Qڠ*HgtN{N VH/ʴe)(3Q*锲'TtU7fgmcIɆA"] Jew .Dlrn݀HbhLTFKB>h -h5>b׮|x^38A:CrLKn \!*"! iiu̙YYE?"]^Ab~ awY¿Elt|oVm}?t>]4dFfl)< !29ÍKFGx1IvpD(cy8W 0<0 x 97pRPRlD + !\FcJƬf |' C <t홒+mBL>? 
tUZOW] jOSJ,PH9%ZaD"/R+ΤcFhtSxF!kC'"0G+yrNȔI-2fKS#`M),|3rY-dD=32re^$iTKmQ'|H(ݬ6 ݤ2YשŖ9{(A 'Gښ]?pAp!+ ^&.<+a` *Jϥ>3h`lleC,֬@мt00aIӺWc>q9x4ǻ:Eqy6G8&j:!}uq}>͎{Vyw[#= ֛}qI?Z 3zũLןN?h<~._x_R{=h0aYqCV-h5ojlUj^߫[VZz\o_X#}??KVlOcoGOn|﫟6(-9n&rm_~-#xnTUKe+FCBף(˧q|owW~i?~hW*_ft}?\y>vEtn坔'q:w}u5}egӻH͔uoâx5$%"哿ny+fwkå"=`׊Cǥ-]?{Gx~G7C #/K6 Jr >9}>Lx ,AhʔHJ#T%ijBXnMyY8;Ыze|Lp>kH66 `C\1?~c@wT>ÊPJ|C{RLqVXS?a Zio~m09$y` %"BH >Q9B;GR?$ٛM鐭IMe1;yb4,*B4 R\p[6p!q]<IʅD YkäeLqsWU[u#:}*St=]&&D}yrs&z7i]'0B$W?Y]mZFFĹJFY,b9t ϼޑD\ޑlW7*IuTՖQSg"fRԑ$2)$eGT9F/@oT9(<.i^L=?'Z\ݍ+~cCz#ru0!ub2n9 Ø#xM7LШ6&X'?VZ.GUƺ;TMJWNf;7=ט[9(o\ A=ȘAgy0wnRQ0U^E xpD0i\&" 'duʹn2R*isΐ5tY[C0_m̵]`)B7k "VT~&Sgܯ`gn7a}o_'E1(mY}̀Nj|?hC9f%!&݊q։J36J ׌¸::u~ N8 L쌵1ht6xi G4N:ɁsfŜun|2,/ipR!8\d37qw; BWf9p`'d u2^2<"$%uIct#s*A`G)HYK#@iCJ``NidQe+3@z 5 $1BXԂJfcZM/JPz]Gs\6NV9ZH8wmǡdvGӘա?@ﶧ6c[) y-eREZ1!%@*7B{G tbI *2jkrJ9V*jkS\`,έ&eGc974sgXMXTj#X,uX'1)xKd$N֠àNO؆Ȃ:$ BADdt,GNJf,28Akf NhV1e'ApRؔD AC2[ ݎY8sD[ُa4L+*湠vq,jʨ;N0(QhgQc12fu߲`&;GqO ź^Yic3!3@bh\ R#A KҴIJC*HJ5+a5qa/}`<Z\Qt!OHdC2 t3'aoGy01i=DR 1mtcTH\ֆd@3!h5jN2I`7F,[K=ҫEsuV%b0u;\p@&dy"PʋKIHY0˓5qF8U4ܓX'tųትFǹᑙNa{k^kɷfnNp3y?*% }#?z?ƀKFk\#>J -JR0&I:G8B yގB]:uŨ'fIϡ')Mp""U"rd hu)zN?kq2%`)Ҷ/oi~Kku?@Ktգ׼X_z;|:ky' FǵJHrR%c?[@T{uգZ#>Ug'٢w-ZQwU0U7ܻEcoG0]臽0}F}oz!>dd4E!Y947d4VjK"]O1 'y&yOjR_ i.!7^6u{qeJ_6FRbiH|XB }O<8䌦esk9ӜUֹVթ*F T_OؠcOYߒ' 9y|roFFK>IR^0"?64Th_8ePi>o_NV S(?fmXO >^orz@_3̛ tCxi8S Pg|Nuϗб5,^]ʶǣ`j}9vKuf@6!탞د1MYW$2;<+Tot^kɳG8],iD^a0}SkEG9av㏆0/HjapÜX-0cv1W:-(ddN -|1G85ϺZ{&_/V(8%m+4ƪ~d|HOrUi:KB?drOy9ܭn_z0k5֮Vn-+$2L(l$_9?eɓ>/G,ԱA*>+i_G?Ώ?D/㲓7ɪ'W oi P& (mϬٰ\aC&8c{k@;ctFxGnЎ&Oa4[gQ]\] $c R?qp_.fm154l~74mI#SRta`2/vF_"jek BqZ-Yu$n?c4,P2ʻ;22f>/j~ZKg܎?q|g ')NXVD_P9hE&ze$Gix2Ih)u5LKŵLFl볾h!*dGх<$)YL&6ĘERc|JZMHO;:k!Hhr$*2SZ;eZuo MWTcT)G!qάʢLJ.\Xc%!b>pcpD38fxIԚpJ*|M1flv D4G0+<&lxܢM2"2 x0]| qLƨ2ZS mDSF'j D!`=#3>߅۲8 |d %ZZ`#<%} ڸOHmX ;nۨJܲ2"JZs+H%) o*c\<xrV`}nJsO׵Tir¨ȷZ4]@j0|tx K8,}tmV28Ut)\][UASc2Jg r7V[_jRcQBGą1 A1J$bB2He(T. 
U e:?nѴ8{ ":t`2GmVdnh ^XTuvR& :UkW3&'GbiVmA5SRJRɈqc1v?VGʞ]ɸsȓR`QD6 ^{@#@ |vm^K z1i"!]%_GWFYuE]JQa,C>i6y*Ze 5uW#3K:vnjU9': hx,ZnkT5^pt(I B9viǍEIAd&Ybm4(?z2Z;(o *"eȨJcQy7) s.CpYT2cC9|*i5kNvXfH'Ytgi&0QI RhěWPrVYE*:j,W039l"Q}׌oN%Ta- \n F`⸱~<[=tyڗ˳atv^6kZet۫% D][tquU  Ipd*"ltnLvKQ#h.֐D[k sQS@q8RV  j31'0Y@/ƽ=OJ3| gj7 JȀś !u>fdsC <%\rA,B2!SB # R*|D=ZJFx>i9mRksL!$*V5L:LC,cd0+&T/!T$"8tĒS't\N&!KklB9KBר"!(Tv70&`j?®z/z/n[7$!tW Dv Eݩ^m[zG=hw!+vyNaov*}ޥ((3µ(y- gaǤYGJ>x%܊YR};J Ϙ@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJoV L(YuU5+ %з^/U#%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ RU!?&% .1L $(`^ +&%зI DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@z@_Κ@G s=%+@ߢHsuU"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R}=JOiZot'Va{}\Phw*%_.wivuiQYV\-NϚ} 28{?DŬy8 #}H1>%,3K␔4%g ġ9;_UWU#VuJ̣޲R{ˏyu B*(Suu1e0p9^~F6E k\;+\??u,__qōAsyf6L")#S*<;XR!e|,̟#l0gVF[Sӻ)mKz}7z&2ڎ57C ?!=#{fL;\ B[<"Ah-nl{btvzʂ!:q夰DzZ`ň$xY>b;O(FX̞)S/~9y:({+镽yk9KiY jRԦ2Nc? _?z5"Woh0ثvM/֮y&PJ`dʵ?FX?^a{ ]a GƧikYP,|_wЪ hN>C&iǑ괡^P:ģZǢwȐt;n+>ݗKB½!T0r%Bcf݋͑ScQI4Ea |D|QS}Z@ڠ=E!MDm |V"2#OpϢ)}H2Ra|9^ڔaw D,KpL358>VJ3&eemldθiw2/-< ;},hXX>-.`}*@` (IzOwCnTo_s}՟OJR0-Uljx߫|0ף> |z9|I%vrMN.rP M&pO&SW'0El.ĭ.Ȏ9F@ӵtgܐ~8ϫm:Ti钣bw~gr>])H]fRNOW*X1(5t09 {!]9JHJK3B`|R%ujO񟥛WeUpvw2B,&sn4_\V{roW$HjßfӲK7\lBFҪq$!7t4 kF8*EF02 w0b|'1W)Ҽ8F:_dӨMkY4.F@a#Uq饞MP~^8'WyW2It&~:ǽпDrۻߞ}u7~8s>+:B@p8) ( _tu[Cx{ ]zWqo#Ҡ퇅n-f7$ϯGWr@RkVې#0zOpWաK u嬾3+Ζb)B bf!֪u򪍱FZG${vRLJTH+I|$R[0$#H4W9</Ѻr;;sWst:.!E ]zVֿyayfxsSBDV C0J(PlZb$M;i\g#ɷD5Fkabr^woӝ3Z/Htw}%B){*/sdaaՃqc":3:;c3ƧMZ նDK"W-R3 ss y\A<u<|kXt@H*QߥEim:e.๺LL*\*pBph'i.w0 BQ6,c]ybe"Ω 6WWJFK9 x$\)=ڠUJNZa: M"Sx\ss}!͐q3c89Cpu^Yvn{-amu*r =c͙g.{* )qpuFMzlX֚zJ%cB. 
x}:ʭwݾݎ6@i6s˭ހ x105U2jZlA mʋ* s b?/Rs yܝVs/JU0[!k?>vf=wAP/Cb"Z2ddPoRG𠢷o$/9ӔYFS v&̑L䭛0_o?-4P) pBQ4~+cgl0h8doLkZ/xȀ-ӭvA";5Bɝx;i8_Y'k4Vݲm\ܢ[+(ǣTTP?dq3E^Ed?<8,!B8We6S# loc^%:ʃ9pߞx$ˈ i03#H!D;myk%.jJt3ٯ6q Ff#]3p6ξ9}Í`<9=3)ƅ eqZ:*1; ovVKjhjVs}FO>Ե=.K4]qy7+n5֞█Uy;>3"쎮k-ףi'Z֩MFYŵ@h{׹frBc[ϭq;.w[زunmww^_G+-x40糼ĥ鎎바[V%hAϛp,0 z4Vr¡mF#ju.2J1io_xZV27_ꂬ^uYwmhyq\=EpRƲD$TH!r"NPO "&l`%ſFH_=${DFu rCi GcBMܣSFEL1Vz#08M:Q7vZ_K-v ?zQ)*jeͧ^o׸ɥVMz7? G E%}3/)`.\}K\Qkwm,Ռ>34ƆBʋijmi'A˘B" ¤04 K=?ryDaDͨ/UNCD/.H}.,(#Tl?9Gr/#uWI27yⴊKjG?@FtS9vHQ*Pum)PB zHQKH X!8R2Ẃ_Kֽ/B:Stt}Q_4ayTi R蘌0SQ}UTA#[ -.u @E*D @"Ēa/.avp%!:2D!Y (" ODH<a U<gswjIk9\Ǻ>N ϋ۶;ԓh=}$K.#ӥ|H:"ۅ S` !f6-h5?o?˻+_ ¬N:94gx$Q9iɿ;IB8F\K@[ccf] zb@0b2 ;9DEܖe=J׽˺nUxX|Re|^=*lql LG JUJPzY,湜ٝ r|UNaU)E>Vu]VuZբc^!nQWê8frX&j >9_[%4{[?Q92*uԯ?qݞ~'j/֭0:m`6+;o]e@9 Y,wmX i dl/EdwM-^Ųjޠ=HE,QLƶf"<sx.Rd4$?gOh~t}]$I_R غ㓣`a۽`>٣_FxaN_ _4śݫϪcUrp>v]|<>_٨{@ʫ^j6r>=`B_ۏόk>6}~:E0ٿGmU}ì[ ́M%w+tQMToV80-onqָr#EU4ߔ久v~yӵm?O|.{K=Ʀn8q /key%|sUY#_dW;lE{*\;sa]+SO57KM餶ϒV5 v{z۴M5]&MY՚E&vq9&]hSPڝ3NVoLa [rN A1SģGPy!U%Da%ȋDKׁ^K+œ ~ -UjH϶6چt`Ht6#b@i(aF^XkPR|m{RDq ̳a`~ҫ&ht' !)~ʀ) Ė B`ֻ.{G!r 2} &z 2nԗ!Fq fZب'#IGQӈ=FȷӖWGCJ9ō8`1wa J[ t:aFhNP7[wqqy[lR\7b,^^;5MS2A?6Qդ:&Z$1_}"(#ߌІk1VY|D J#Hx9Ó3۰;dx#/ oþ,W72j-˵HB9N8,<s)Hr&3䍪41Z ;eM֋{@g +Ut7jn j$IꄱLbC&A #VB |[ڡbd'Yp444fmႾ lߨ2)]Oti<V?'7>O̡[i()=12bFX ^ɠŨ;|9ب/6@VqÝð^:jg(u(Dfc̛r 8M4H Ȱ, Ƹ#zR)Z )$1 Lk D'=->+2|g`Gz5e=_'I1S"̐}|4Q?e=_rLKBTe__Q8Z UrpMKC. 'XY]aGdڨR()SHl*k RSM1Ø(.Fϛ(_`B<U8 S띱Vc&yerDk45[!-Ygy@2/3<~ e\JFHG4L"ԑ&9 ,QPIcQF, R$ jxT~.o>5!(,VJ\fA֝5ʐztվ.d['FK09ZbC8㫺nP҄e/NHX:K7лnڥ$.?+*גz,A*8X*U) +i#06h0(GkQ+b\rn>j:93@*72fȘdl+cS,y wecrp(r~Oo(M?[5$ +"L68(: J!1vV$!8GF!){Ї-$& ft !#a:0- Crumb jg[6ɌڸCtvp s)1Se8!p*ҍ\HR20C@_t4HԠ$ XG Ɍٺs?J}#8 ZCWxPU% 2*@7(OB@7 X 1yAU聏t #$$8p`$: Ą3[XA`r4i$5]rx9T[ŪcOldS\̜zj;\" 1¹!V;=LDJ:,~NbzbR8.'Zfݓwp#>cg|kFQ{;y?~Ի~$!F2a i/`̕'1R(p4aAJؔ7$A2 !wI~;Bw7|o^8B@Tp\* zb$b0KgRi5HqX:Yڨw?V8t8} "mw!olVw>\cqc#H} X;+LL^,cQ cAy7w]vʺjv=HEVc8[j]s#Zw(tI4V@a}?,*3vx9kg bQS E1[l;Ea?Qc(,?' L Plo#.O4~=s2Yφ;-0)/&n.|0|B9tr2ָԂ@>m. 
Sauaʢ_pϋ:FWciK6Pyenz?V_bj>n\᤾hKW3b1|At?ץW_7-G-(,3'<~{%2`K.-bePͲ,o3`l:RyK*R: S Mg]s,IQI)%3ZX&DcrHu]&=Z@R/_jY"M%xM.$+PAf̪zP,XY^8Q(-wT[cJC!DOJ  c X@ fɚ&愴?]7m3X<\0/U.7C=6 Ww/2'+)ZjkRA v"Vs R("#P#q) #ۼBNɝ3rEemTAs`yljfBtESϮ(QW+J%uED]QM@j^#UF]Q(QW+J%uEDvED]QNڍ%(a8NLۺA ܋TpÕLbJcF1K)`)$Ť;:C\BTFڦlI wla5GLIE\Fr˼q]KaDN˵ᶡ2?Z|-O1b3ʷll/ZfչfpTerܜ_?]؂t0/.>A1xCm9,OKqiY6ӹmp:^ wR+uv'nB%€,e .wAq~@3KlA-حuZFѩ[iGRi$vikNNSǩ Nta=_vuHρg ZGK!3jg>LLj9+bI}dzYKW7[I']ܒn׶;4Tl@1^Aۖѭ15ݻ"mn(Yʖ+׈ؔu2Օ"y[ F"U;ƈ)e.n` 9ZtѤo5}Xz{։+ɟEX,D@R69$Z)LT`)8FQs ",:`x`@6`8+Ơ0S©0QB[$Qb•qn5ENY2܍04U5m'%pf KzW [>G@U/ jư sORjcfcD5"T&dtRMNujDf9C"CuA(f׳w ^ROt PV)msv0p$ö)N7*| =wUIxmV 7}E8:xF74e6>kgΫP&O5Fpłд@GY?aɷIGVaݳ*慿lWM6yD6{pܾOsкFi;Z_SSӭmEoo_Ͼ,sė\ Y?p2tͯѷ33$ck{ߩa>ON{Y7=o'fn~Q|M9I%| n8oFwj< F'Wɑuٻ6WxC~8bOFtϡsu8 5EɤԶ}Ň)Yd:ileUDUC"cutz>ʾ "p:\obC/'4~M]Gڐuop^x^Mf?|㣕Ko kяҶכ4a[Kf?~ӷhRP㾪g}eg/z~Śs˿^],Ӫ3gvgq3$^qi=׼~tKS)/OܳHy:xxvZuz{s1>f+V0_IxWw "K| lP*D!kjc'07TR{Ra!z+6Ġ:uK@yp$ƀ;N _O?v-bAHF9{l lA!+f}"uQ* +*|pzM^\A$n;$fkZ$^_oZ q}ՐՈ) Ju{^{iWq|Y}لR.w-zDgUt:4KO)$agv6alqh;NŝdiG0T)X9RfT>;KDDNYP1HBRnJ#bmukc1S16+BH;/]5FĹݲy Jϻ7GOlx7d:8@G ׾b?[.PLUE%y7yIRdYEX3L|ˮutN3 iN%blD*Yج|QK#. s)6Dl[_ERM2R'̐RƐTE F_cx($M'f<ĝ.{@0؝5Əӷ;1?ZáZ3?HY cx+ "),N{M8OQtl%f%hEߐ@e&xّfjb^ܺP+q7T:]edƚ._ e!bDㅐ#^̮re;uOJKޛ__T{˒ǟJݛ^A5ZJJHYҤӡI$1Z+'hRhmgtĒ]XTz(0b:4Hi vR=c3qJ3_L3v/4 NxJ`J~SLu`zd*$L9r$rf&ɵO\cL=W/W[l%Iw/~Hh"((kA#%sG'm'Vd;}V1&={/j s' ,(l$)o÷j",ïy;gL}6A" Pԁ3ҩ' L3C"~!v"R~c"Rz'eP)>E*w0 C phZC|;Ir>L=] d:`k"A*E&cM-SV*D @]2.NUS)s!yˁ>wuʪueZWX<Ֆ,^ꮸ:FrLQu֝Y2djƨ־jtP}hbHJsn Ol2ƆZgFmdؘRDlQop/6 !@dH c;NF"(N@yQ\QTjR'(kVeom`6YZ{珼 ?9J .B ,j$.F}gь ͈=?2 'UcC|^cO> 517ZVs&YT~yyj# ԠPRH-ѹ8:Nkt&WG vG4]Y%$! 
AlY@"nӋW@MhzQŗҽ^?_Ϡvnєgojt٧O-{ϟ]?SJU`Q~YL/'^nυa7f-)|Od||x:~zVoՑO_tp'4bHmG:6:b0j߽YL~6\Ә7SQW7mmk\:!~ZHز|6ҵmYH3y$Gg5~(uZ\l<] _՛^W/͋^*yo^gVWW Wx [ 㭆<`Z?d\򖷌{|rvsko@Zv>wʏW=b˧YY񸂮ꮧ8W D{\W:T?&V bBc݀@.|uZڅ0$d$,*rB(|]f2`HBH庽ȔC̮Oz8Ġ lĠr64hB['^*i#8r "r LQ+K:4YTf:k_g}*=3tt7]ynUԝIr?;eٶxt/1]:g0I(&I :Gۅ*g{/w\˝LH]rrOKsx>Mitp`Rp S~|f˚uLvU- tr":u DEՂQP0+cf3(Ψ" Zp_OL:)0 CYt"oSD@FGRYmkaLm5Er8Io}|8zBW<# 1Σ!^{3j/Ϳog[Uݮv_dָp:֯mWLװi=2Rž k/٭v>kQ8 G]CKZAPhlmT)ahɹˀ{f imuw6I-( e;VnR=mRX}AUU~מUޮ󧪻+~:[>Gb;:$k؎)%X- I  XS9TncK=E~~aK S]\C-o(-jE՞y~.4djOZPLS!΀H]:QgrY$)ec5dm(4s D M3AG1dT&Rl)yfBE7FւBE6c,Fb3 21 !UHIy.Q3qF7А~-yݜ5_ϊ>yr qAj<Ž/7C8:/LTJWc`SV'Waw}/  *D8q IԶ(NB&b ,%Վ 收9j9v"cF"LFR%QCZl&3wmOIU_pk.o5=z`g8B藗CȳHqQ$CR>xVxšD9 i5UWU5SMi^֙aڅW\.E'wvm<7PCs7f|Co;D.ǟKһT:/'JͲB,Y`ɢ{5/ <=Ԩyny ;=sS=U喝f>[/hΆ?+Zכ|˱$e13͌wJj_x_fn-7"n.$yn5jZN}^pf$%AK"Ud 3 Rf[f[r#cy=  2P2% ))XLTF_;Mx>;wa-X=4iVa|\O=5Qw1 IuLhLxpg&DD8± iSVF@P\,KQ\8VgM(Br5uDqJq kpcٯqǙmϞ~qs)Tr;h0KRS߻I7"tAg>tpW#:I(ŦJi7áԇ p䃩7Js+z>.I¿;tCH?MG(L1g`o:pULreRf 2d/Jozݻn۩}Yw5{gts jWE-C$VRΎ,uBsIPRƃ̆R~TL%0 xoԻVRZAgV'AShݾ&-KF:ZKo$A0%[>N _Z=U`$VKQc`pTr9{b ʴ<"]t]|XY>~Ȑo.YKW3T|{W>$RyO: sE$ v2Xdj :XaV9\K<^YRYgH KV[ą;:h*YR2$Nq2쵈?4z; u`8x3w}q]M+xg?:7oߜiXz~s+7q3Ot9TLsk_xy HDz`\q?Tx`FFXqKz}d{6vaMl1-E2DMPuևp)]bibRŹr3;Ҭ-gÂM/_qT'8/'bl&Ldg")A9xAyR!6sHg/9;sv֦|pȂ!:q%KM\yG!rjP4^jd ^㻟޷MtdI#rDB8'Rpy$8J6 *\'xtTX.C=:[16$Azw쑹mNf"oq)iLkS0-N'D5A8jp:}\}fAI_KSMlsy!(twLʚ %7ۃ3sG3KC>vFi?ﺽ851`vUIg>&zh{۝"Vg궱ڼ4[+0ӑܰDn0*mwc\R'3][JQwTP&~VOwBwx~zs^|{o(3W/ Yk](~@5˟?iTެinD krՄiW59-#.-y|o-󷝟>~ۏi"Ykn5w3u?v+mTņ*^}wW- r 1!e(g2ɑVg$qشw!(%JgR:3%IR%+.(Imp-yE=FM^>4 L ZY.L;2DSFu (GD梖Iou9nxnUdb@jZVϨUXs)zΉss :{qmU-[quGqq[mj[ %DaB)`=3%p\2y*xؗ%h:IH>yur*) BL$><8Ku{]♯IXFΊ(+;&k R@! 
Qj!HP 1Em7ĥCn1A}2Mm; S5U_&9 %Tw0 ᆢ*mZei\YF07Ϟ@iNC49:PJiȥ>r$I9QASH2Tp0195b[plei sOdF+q\:i4w6>qg2vj=Z:Sg~VpͪY vv4oHN~^c 7wзm%>|qdiKJVIsɢCl(҄GkB9hR<&<2Ak"KK,h`S1 B2*Z驎76[rɐ9h/kX S5C7{@|~gwY6Nr w׭^޿>Msmσpˇp^o,[آdy=ח6um -N|!r?}` G\ Ss O`el!`a&CM·4xe&@d`YA%Og2d6f)OZp;MiS-W!F!Hc&&"7l֜I-$gJ:Apg k 8R2B/GLz4:_2 Wmcv|eJ**kP 7{Q,#*p\8`9ch(o޾ݝj4f_.Y4āBo6ۖ<;g'YzƅnCMK'U3_xYAU ^ؖ<V#jɤV\;"Pa^D:Rʄ)IȄoqfp"IRzIPDRB #z^&Xi OMƺB[s%s+Օ86( h%M>O}sYu{ z?6s M -eƥ"nHZs{rySN(_j(aPEdq ĄS̱ (0Yo99;¶ 1C-@rR!Db:Q"u;˝^iϚl;5oVNɇ'Oe-n}W?{m0-5*\xE^вΌmvO.\pR։uJg;kA? vWk1sRx!:.t9\]=Ngy=yW~lbK֯vޫyQxUb5/-`G}]c#~QHI;꞊ kQluɷ LBZ1ɪ3ݝc:ޗ[͟_uyv-N'>V[u3 B }v*|2P)--worwϑԞG[„p(H wBKF0jd@oӄ@qNv9CfhƇXLdzjb 6!И,L$pfc!,(,F58X*7YRep 8b2ǥ &UI"SAd+!9mO0 OF!6YIь*R ޒImcY+D~j/B{iO=rڢn`kYREOq;KR#ږh"rH|pv&wZM1xg=CX{i,榴X#./ׂ/g>/>:9ؑ’'ٷݑR %Ϧ$F-E}Zpk1 e m >ž.8 [ɳ ;ҝ[KwҳV:5\3(5Td :պOttyz6 :"!m}iN%#T@lE%y fTT뤗(5 I! 9RFm:%*EѠHX82%j'GI7yc<ׇrny{}c0f= =ѳ㕅Z3z~`t>$RZp)ZM00_p@ W*~4[ugٷY݃X)J"Z<0Nrx*=[R#D @qHx$vܓgja3vWrf6:=?kX0];z7߻soi?|8V .QP@qupI4K^km*XKSҖ'CJ B-}`񀁄@cR E3ɕKE-D cL@y["yHlϷG? _f 2n6<ܚPWc{f ɳ&ہC)¹8rD5,bѺ32hI D2P Id) As4)<0zzAw[}2YtrHr7X$ +x *RTs-5/fׁMjfw%C&Cۄ(RTڳ;\,]B|gw;nK0n(ֻ sQ!D%@+X.bi5P|F4>xo懷attA_VߗjhT8!#>^l2hyNo_9y?{ߩyY}?7'+$G ¼O^]LWnYat~i#~:?~^3\LGYĈ8]-/bE/GМP˱Vd]/7{4avM.+?-5KW~]ϫ OAn½D{!=\bi8nXDƠ&VTy:礛q(2"ReE\^nRdܥomK]Z[d䳫, '?ow-uR4{X+Z#Q i˖gכrd1Ǻ`Ee+? '#v=6feFfg; c=^T#/ mo |RRNOH)vS{<ܒbe1֗ht-n]O_B66;>H}'E57h~;]݌1wөI%3^GjǞ!>ZwW"Y!ohs Ig6 HJA5&/^L+Mx*-&v]@O`_YmH)is3 Lb7LTkry| (T\#|Sر7jlZu`2d5CӶ7Jq`gaqpeNqSXKДAwz`^rf- - JyJPRƲew5P21C74rR!Db:s%,wVz=SYg1qv, wt|[y$ri˼]be&Ʉ (C<)s B . 1A[FT$@&3"85G#ь#Լ^kbB˥np:kkxxFf;7d?>Q#.~/x.߭j-lJ S/KT' 'DȇId!;7ȃH  G?11sTws- I T+&I/ccpla6w?759Y=s*w A*Ti#j"Ά2JB\1FLe=| :uXW}FXg6 m[|W7}W4fӊhk]ޣo?"_B /?߭~k#UB;F-]%bk5ltp۪CYە+}s8QOWuGTYգ߾}bG_17u(#z+䴕E}u]|9^gCo{@)X ?îDZkt̎tb%׺mC5^pT _mR-$zm*Y Z·6j}^VtDT~}qT/gw73׼{=LvNK-דР [ aH=zNEaŮ!SG. 'xmA2X{[~Q4%꤬Lהd }il'k)X`Sg~:&кWC4:AH/5Q[KyF!Ȣw0uRKE]g|u0Fb.|CX89EI-MђXr1qv-`{+0RpQ2 VHͽ4s YS! 
ȁݟǤ1{,]z/ϯ]DqH Õ?g؅P7+]Oa^}$"IEC9wL]/Or졹N'԰U_)z)>8o*y ئP79<88;& ;S8< ?:jG0umQ.Ԭ.hRԁhk53Kqސ}3>}.,-]5m kK޵#Ev0Q|;da/5%ǒ'߯jɒd$،"+?j5ѫW͛*i&];!g?~A%JSoq́[7T9ԴM[>mO׋!1(sb]rs $j_o"^G m$%[G~aX0amZGUN>z|?f?09I'Qglm&|g2>?n\|z%}A{U;EYЩD紳pחȎo߿y_/z=e뿾~p$z^ ߾ɿC_1*547kahJκY]ƕ%20B" ھﭵqOgG/37r6 1aOlcM5U??{WV!1~@Lz[^16HhQvn'$RSp*_3AJA;uv! s k'к@P3WN=o5QQab@E т&9xb8;oWB}owNp~u2uVu< pݑJHzםq5pA!'`F "@(u8;'spѯDIwēZO.9 TH!Bu$<>U*h U HsktNB!L%"(b@B9O޳`-2Ο"Z(C Jk&DJ 0]*K(Fcs+820+B'8!l )8,ӞXJkf"c kNX MwgG> >hNiQJ("N'ڑ$Lb)Ya#¿϶?2 39 3ܑ4Q@Usnf9?Wa݇P $RW` Һ.eKRu$Ug?geHѪC|Ooڈë޿֋Ro%лh,~w Ƴ_wtb)޷ C̷WZe5ނZ_U(If|.&*h8HKFL0LJRNT'9x{WO}9fK/`soi9} ʱ9Ge@ 2|R69ĿlfS>_ltkenqb̛5odP0:ofa/ד=p]M,odŴX*gHā鬚sj㧳j9JM+0TFT[+g&.NKO^pgo(kΫҭoeeѱuմq9-lwW[Vزjun7-o)C -7h͡sޅGô[D}Ot8v[(9kΦ?kk4y˵WmXUcUW;%;;/Znm~$ru;}Q$fxY@rINJc4 2;۽;Dmwb`P,A 4܂2لU7"~#S"wDsjx8z6 ͮ[eCeo'6 8)ȉ|:4+ϙӐv,L pf2ziI#K`{CErY#TcFxI.R=T)zDrB<%9%x-zrԖll嫳7Ym)nWt,KA',Wv 7Qj&d(ZV ,9rCnvUX|e$DSPA9ES0yo&B&hlJNh獦<Xg?wuDku7XeI@QCPX&44Zjh!Z]ǬT%bں9vHHiny6ew@xH2A ԢJ, 8 Q*5CdخL3I98 u |3yHKݹ5u u: ge+ Ց 6L. hgp67_U׃ۋhdC Lu>ȋJZXZE OLA%T*YC:XD"FLQS R-OYeR?gO.^scM}/Zϫ秂e[Dy;;**94\Ks7(蜏1loWYOU FDCtg``pBF3xu;j )L a.2siqTDsn]b6 \\DE\wa6=VBBh(muc#c<(RTTIo MIQh\2/CN1Q7Sдh0s&I(уf )B8CIx: ^o^WiH_m3ݵue݁-k(g6kx&gx"Nza"4;Pv(]8r0wX#<\ J!J:-pRuL.CzZUOy11(]jR0$x" Ls@:iEr4JvpA+N=P/9^&̨JHV|NsI\j˓A RD6W""NIz'-º -ZdvWQ8d0GҖ]m'pb;R %i |Фj\S:Fggז}=$,IԨˁIƭAQ56K'C[qjw^ ^"]a.yd; =݅rx{3dOQoi[tB{:Q|p~!zXfܭjQ3L.Crv]OyT$^]OWwߖ{NJ?.佌7$LBI_֟Ojߵdt}_ϠF?l|]tcǗWJ'tSɟ 7 -*i:B蒻re<l j0׬TTkrpΰMMx8IS,ea.;eU LDfߓff;'Z /7o%]|UVwKcNZDF#3R*K95T.@A䴡*1U<-塪2bb@1aX@E?{WƑ /z~`vc .p\Я4Rle!)J5i[bKLuuSOK2":>e&2Rݤ.BJt>'VtW )`; YD C2h`l@B7*լ%% |xyGzLn0+ry|uǏUEy3v3k!h3f,5<%T#2Fu`P9WΆRZZb'x@q1b%~)e/"PC8@b1ơJ펚_ڳAK.8=<;ɞ wxΞ5PP!}o:M[Oj͗ݳDJuAQy'29&y>f`C$ikǐy[:1ΤTiPd$kJJ%VG"kS\zr&9R79'988ۑq82fb! 
{\v~Qqbfy\Y oǧ~#5#Y%0(sD$c%r$Ȅό򂃣;aeQPCM|$z!EMI& ur;f]e zc(q#vNZy(=ub`揨`ox .YF1K5++ [R;pTr>1Ull_d|E2d0C/F%!B\e>H~`<Lx&ˍgCAD񈈏x#U52M G@ (fN0BH!{!F} !y=D:jc%"TsUʓ&8HܐP`RѺ=_>' {T׃'}+åǞ$ᝢSYSvKDt\dI"/RA)jb^l}QǺLr>t_PA]ޛ'ޅn[s׏R^8?w a}9$U_G?_/&x2Gv4}|˞ZҀő3޽^-?p Sj Ebs5)X9vI)V@!Oεa lO/':y2>=;oQ* kDMK#n(dqve8Bcyu N;ɼ`oG]4GzRK1=VcOG.cof"J^Wӷϟ)ƍMbdiIeD^ݩ}t^dDcMY2qF&Z+N(E$?5Lx|޼Kp{p=p勺dUr4Ҩ\tppM]'1E0ޫ{#W=+) kQQ|*M}IQ(en,FtlstŌM<%N ;d ztvӤ8M oq˜kR{lHT Ejre˂)П~v_"<uyUmW~:YqMϐ6*o}Hl!熸nэS5lehEη5CwӕM* %8ѪZ,.NyL>1;u2q[w{Oqecܞ n7oKoNٶZGx]BeOKR]^Ԧ~lln.Vl{ӍNM:q n%Du`R} vwlvbysO 2'w튝m{v~⽾V:k)?^l'i4o~-rD>M7mKn;4yݧ5yqڵ79hC Xv=aІKܩh|e5X޺i5/9}(x2!юJںkHK%357@Yn>lH{C}U (I?e#a}ќOgڹebk_-'u/vJT|;D.2]1!Ɉ9S e|=qea;MTS\cYK($=IlsHbO.LO } a/ވ͜ Mr8YقK|H*wy0n%K|kƒ{0*6yW*R7S޹i496Fq7 ?^NNg<~4oGU&&Nק=7hѝVFR嗮tBm`GtV/?;u<;zO/J9utvytdY@IKa]OʵylwvmUʣc]MZpMUgltH,+AݛM}+jwq%vKpN+<9Zh>uڽ_z2No5Pmj1K[ZigQx܆˝ 5 P\ý%:ZrH>v 60׊ij2Xٯh̏t^tDJoڦnB1Uu|<=yV"~wbKZxq_k.ݳѾskU*v*At{ VVF{Եf).{jS`l}EZ(\,HG+)oW轇Zz0 Un6GV)tR=:fh cd݋}Ae/-6z)%s2y͓RBA Y)r/.Yk5QN<_$p I vIiݯn%z.t򻥸ʓ(G/ʶay-E2H>TgZ*`'&]{ߦ`>(n3rh R2$&ƇDLRԊi9qU K?AqۧlD#5gjGmVբ*R!(-Tke=a㮆\]L@lc B)D"s-F9|%a>li4X k,:ndcSUTeJ 32P"lhcVNk+&SOy20zc<$nvlb`L*!aM":PU@ߥ[D0dm ҧR L:2t=ðTrdGB[#17~:}כ̬>rDKK$n.PɆ\,xbLTg";nWEnXEH>\82BIaVrHa - i.IE F3bZS !nG23rLA*BN,)P,@mPy<kL gl0`XfԋDw*`<#R!Y̸E0Mb$fL|"U,Fa Vev LI;xR͠j`oȻl!!~h0`HSx- !RX "!Y2u0RW )`ykcyk@A{GhP;̺ r `P* ajC&;ʪD#KAdj< `I D8RMVr!ȽUi*4c`q 92b\j` F Y J/u2 vR#m0Pl ХU݃.PDvD0 [ UAy2#4KhvK1QV rhU CNl b P>B(k ) d$eP{E%tMކVAhGcM;f4 ā!z5xrI'#.cUmA55kFV4ZnkT9>p4'e#s4h⬍KXD3AqX(&ɀC rw72Zdp% Xd *Ü>F4^8YT2$c}:|*i5kNv`"nc,o#,4ME֨p Rh›WT*m-=+Nc!-B4-Ia.y5$oQBeĐ%`&6֢1wy^ۭN0:]y8a{3BLnpN`ڢ+Z`8FL-M <;5,jo 5$Z\UFCi bLy~Pݵ=7Tfܵ1nH!/7C|̈ȇty@CK6ڹFm堝Q.eBw )0HH*P#' \ cF7lkɰn$ !>?qE r\B\$\1;9ՖiAyMh ;bE 9RŢ4򨖌5zwԳRuHc-P ͏!s6,ѳdD).XZ#eBύLVjmR#4AYJ>ڜI<(1SA"h)I1ȟe .Y#9cP*mq~\ho:k /5s%Jm*F m,~33![V2#С 5 у ~4BSq#Fx3Gli3h 9$bUpDt0ebBtJFa%܄$‡Ȱu!ŒC\N&!J˶2y]Ztj+-Xx3= R GRIntu4AB.n;ZGJTmő`.mS[PUN!1|0]H]^:_K?վPg T'P݃KH l!lx2$'&K$<#H ""H ""H ""H ""H ""H ""H ""H ""H ""H ""H 
ǤĴIqm[_,֊P3ʘ2.ϾM#858o%4-_VT짐lf!"2Ze(LpIZ_kNpK=Dt ƚyJxIE4t4@X`(n!QR(T0D&ZՏR{B-XKzVF$uL Uߖ'T,bii꒖!G ,o4@MഌnT ŧkiŵ,"pW=0)u;i{wuZrb5hQd)K.?j݆)2#jyPUUY.P7Ȳ^0nnhɊYKC"XZ-ȇ<䐇|RÁ5+뻥7F>b5Ќ:-hnKbIߦ2~yΠEfXʵ7䦷ۏ:O$'2 FzgaKi ;|3;)' =3!gTG>4҉a離LiDc)r`YYJ DǒO%S:dp7Ou=wMջﹰiIZǼ[+ڞ&\}x݆`lD"LޫQ kfSQ.PPhm.IqC.-:$3@uQzI'sKІbh@VVx@ $M1+W`.WkL/ǜfk_u]hJU9GahzE8u28mQ9I<@%8Qu;$Rwyr{I >,)9 kfg΍V6s9E@K'P\CaMi/47R$\ xK ҤJBWI-TNw*|48e@h؛L]l5xde}FH /@,ڻ>4bݲ}8\ά1DCSd%qX0/AJ0r+NVF\7L8PGbv'S 뗸JQJeBQYgESdJ/)Fd:}qڛݬ-|Ʈֽg{2zwO7]c_Ia__~BfI϶F|t2JbtoFN+_%|=Cm:| 5[[[-;-ŭ{?rly7Gsk~^9F||:rB 3 ErX#d 1DѠ'GQʨ9hZZ4MUZoeC&"aYrS~{@66b1 )4wa.~]]l%ޜ7MDo0FiE$| gpHd=QH1.g 0eF&R0M5Ed%؂ M)HڤP;|wsQmNsC 1 Z@`o?0RG yA$ۉH"][o\9+F^aER vzwFOL#5gl'e\.'er8'0bWS$"?RX5355me5* k`SbrS(*ڳ]4M~:io^KE4yR ;pO$~8g6b:wQt?j+7o?_60V"W0 |45xfs[YVmrvѦj R|U5> $zq<;jYT쥩</XyT 'QWܧi=aNd)xD o N Fwd0QEЄ\PkO c09 {Y@kRZu% : ĤP'0+?+v6Fƃށ?m;qoi#n&,](|dQ_s%^<;u;SPm޿[]ୗ^G%{K&zw[Ex!gTͮ[\#]M!c/M-L5}4 SѪA_$%!5Je+eiNmb[h*st%M10/xw|aܠ<Վp. fuD*#ʏ)Vб|s 7^P8r}TcR:Ϡ0Oa)ع=)9De[Ղ_4ͫ)tf$v^-޺Kg/~wV,{rq.5 "%Qk^S$(ΥVR+ZqM9Z.2V,i(%VԽPޚ^#| u7Y,NasH2Ov_܆K>u^>o&SΡVSZBJJ̈ad,V5j+eO)]Eɳ0J߷QZ~ԨP,:fΒsPM(0NIgZ$ĺP>fbE1,M؀[]nc 5#"F!z冉$;>׋&yHvJw'2:^d#ft_YTXs%WiL/#C3VD>$mŎgr]rbUyؕt9H+kk ?, GIzwrh++%]o/__?dgE#O`ZqVt @NWlgTNz*0{L8%FdVje^f=G򎇣e)IE A\8꜌IhX|Ӽa^%ozHFE`G7J&XF7ޥCŇz T1sQב+yP߬ 94VNG=dHE9e{??w|(t=~6/PqvM{8~p:M;rNVKF$;:x~,W'RƫVkn8&B B!{mJ/5;(FhZD'}˜J}eJ.~ uq`YkϭRYa^?:<;qyxӎ=:z[k:q=Óڎ.#{9d-w~x-ޘ.ȧ/j^^5| E4~/sQXs,G̡e T)\_,ip܎8g <W ķ9 CU VDArJT@9`Մδ+<Ix +Uc0[~sY/|ʳ;9nU|' a/f4¶x^URQ룸0`P[F ujm2+D7Bbb¨G;ڑZ'BvrpqdKg}]r3I홍-8];).2yq&LF^_߮~ +dیO$JRԭ acі${dYyXb]Fvrɖq%ʛ}`X=;nb}92>o:|,:΍Sީ~oQxlG7Okand472sAg{qd qP,}G׭tآI gkMB oPq%VfxYn`\fN!N&kW@N92ahwg;2a2d<8ײ/:ԡ.4~ݯۅB\.:䳲Vva&rƎ#4.&!1yhBCQ ?9⼦S,˛"e/J"=YPCL @w*UR~r8^Df̿OJt*c?䢖䜙H5L{r.u aҏoy0>AQPr7 ^3y&E PsJJ~H+u紾Q7"o0뜳Ai}sZY_#+xZ[rd 2Qήٕu.J}Yh]s^\s"vaKfhw/]R..%hG:U7sfhA>g5'5+ m /OksK+#_pJNJN?_%(.v?]=5ނg :fx_WTv/Ϻ3_/Fw(G M6mBBGBlX`b1#Z?q~-k$,2jpZҫ:o'kFD#f{e=FeJ+hmf#f6b6701_WJH|F (ddوEEEhA+\fh(UÕbF*]p>(fh7CMghCQ[v3&OoyGH 
fhw+iŷ4wK&^"&J#;7[,\vϟqyǒ w#]`~y6^mѧ̱_~ݞv_ӱ9ۛ26/KG9:-;|Ǘh&mZ.ﺌO1E/Q$$ݎ9hkb s%$ %ܞv477.G\)w?G 9TḶ("Qmb6f|ށ<#]Fo*2;U(z@y4G׋>hWB;]rs/Kb.> `d҈vPa7CmovhǞlXBT|EYɤzbv3^dwRݨk=nms.$0DMuS1\0Sree5HYwױ7'TtVQG`{˞sFAҺmGn fC а!#"UDWnKQ8KkıBz<-Yh=u`4;xrYm/cH+T Jq0 Mj=aݪ{S/FA!8-)/NkmpzGgkp [IgBXuӳL֪H/S5H0#UÌT T|ѵ,Jn z&,9W[q{d*%>|OFbxWZc:o?4~&wdLQ7oWIoL* U8y#LܪW.͆BPQ"q\$E:f R(#S芚 [v(wRGˢtFazX @U%j`3 6h謏E1t!TQﲖVtS 96W\acp =EVɸN@ J(jS,)bV8S@S UʨPuF ̅1bKTѮ- 2 FB>VQ|t'!}sgjjq&! RTfj>8ݤE2Mr|)$-W5XRB$?^Ȋ#dV3ؘz]Xy󨜍j;M j(5o?g(#Yt@t3r}%gyv%Nџom#u@D`;)v:9!J:!82"B'5Ǧ8edqq",;;poC ׈ҋn-HލZWGk] ='g!,>rEWѷnqJ;]nQ2}sD&͈`wpK0Ug+A)e Ć( mL7!jl QwmI_pw;R? A`pF엽XZQV;~$% !EjHZFG&[=Uտzo[)0idK~`S Y,15Y" c)'Oo*IA(F-=YG`&uDI cӂ쥝[ƾ`2ޥdx5HzH椧_g¤lT DFAʛQy%| c$fM(f} F 5J` n6J#$Bd,btvi@}d&9T-06(Rnm ȆN82e6H6C=sfiؖF K|h(bx/_ZJHE#|AGO.AC(G irThhq|8U ߌU7KaBm9t?\cgh%'F/6sLf'W: , >=v,2h4(S;I#%YrS'0VB`,qoئiPLi_YY!b퐗GfCk#| G[ykKD(c,ɐ#b(֞<9~M 'b,DL4nf,9^ ( Dncq+]|hIPk-!{)5?ṕbF)C/NGv&5eWs5>I,2.H~!lBVTI*Z҄RQsl[Kkcǐ/DFo!6i!*HYpڃّLқ-mfkK"KV|Nk]FTPtw̧w]4OnIzwI;cgRoMR@?/b$M'JKޒjul>8rPJx"q5Y\,/1q㫷)'@ىXūhOaP[S{T3AL P-+`M;?wQ jW2侇C =4[& xLnEKeH 47{ }Thkӊ5dF0$QY-h]^N7Eq1l⣯|ccOh 5 Ƣً3T‡a v%bh.uoM"K^QVzۊ䍢h"]dҾ9H"IRoPx Iq2h79d6C&(=b0g7,ZByw|6-wzj~HΠVoɠIf7rn'$J4 Z; &# l"Ƹ4-:&fm:[p>>o+[ YoOokrJʾX]r~N3S6d_\vOІ(W 1Y' !g J,r3CemҬǕUn[Bi@Rҏ?RRkbK3 Yy{)P.b$]]څЖ !Tz?B#PǸ :,QʊUVuGN!{RAdC]kHwy*h&4/J&f|\k^O-@H p߾7Ͽ>4:^.2^IRT@<qO³O{1_^DZ85ć<揷 ! 
֫JTZtw8,KR^yG2f3usJZ2FBu?M:l4V&:/zo y[oZx>5Tutn꫓V (h t^,_gd{n&X ܑq~lP2|DO/,窱հ_y1WEO[[n 3'cKgYU}I[%{[2Ut=}VWRu%7YCVyF g+V\}:j>Wy#qWӏl`[-1nsD;ZZ=;B5snP}pgƯyWT~`FǟN`|Z@}/Fk~R BqxEjY _oQ?+%Pw0H3V{`=&}73=b<~MZL-Xo >c|~6oCftv58,Kft^N~jNGC*hW:4%a+Cmi=#nA_jN!/We9聒J[Nص!{DA0I}v*b1-0(L`nìrKmkL3lilz2sr?}D&8CboLNP2l'^}3]P)cSTO8p*M57F8Bqd},;̎ ݦZK,cٚ;gwƀMK~!WlSs[EZ% 4YyBm~OuK نUHJoIҲgWS _\U4oa@>VD3kSAvl\݌LSAvځ-*oHVacgzۏ?뛽|ͫ1ϧPo F'_>5-r2wwodfe8t'"Kp~soo*Zw7-826:kx(I8Ml3Q9R%?2xcy0lHJUJ/N oނ7{?\(n~2huꁇYZ8#6\'K bhc"a/iNUn9)Iյő7-8\|=5X#f]~2Es7 5B!EXpD(QWgXM$*TPmZ}vh!j"=5gPӤ3(3EZ_K5\0OڭA>ftH;<ω *[$Z$@;NZYݬLe @RKG֘ *oW^3vQtǯY^B>g*%Lv-V} ѺzeSv I QݬsH WsaGB zV$H+ݗNpseps@E;𛪱r5OnvlYIوWEQȔ&! X F{WG^Gxq#&WŪ,nϱSz 4+望@5 2PC՞B|7 qVV7Gr#l-W~*7D;YJ+J>[O"mWZ@=x6iLz {s?#*`)EbN"n5_V gg{ǥQ~/{Y~:N'_x mIw7ǴJ̽7j?;S~V>ԙJ_)O.gCc/#?lbgG?:55:}JKר~ #QYN>iNK3S 3jA1֭ VT7u3#Gdp z*{I5w"(Z~̍d6Fz{f^*x l$؄kni?z&Bk,3nQSH:-qFB6Ύ9fm۹Rye|}J6&mř(c]]sXceDLnm;W yJHSf52qT [Ly Yd|i} Xbtk[7Z]3K־oWݓv$@HzMxiݣЦЮBe΅[X_5w<俎aE?=n7-v* R[Zn-$M>$C@}6d}!&ga#r+bAQoY 2F'6Fal}rXV$kpjeF(F 2#jڼ[Z3JnNF JyAywܕ>lcyc41 ƸY@8ex(,zp̧6ęLI S!M䗉|ҿrF*1xuCg}/B`p,F#&ms'#2'>2d`Ð ':l;{4S^ &Uk tC8?{O۸_ew\aNvFIEBUV"#NwlZ,Rb`m"U~EҊ'n u\*-w)@ dzGsAR&>6ƀO zg4GS#+LO2|<3hE~Œǃ8Jh NSXK/<%!mG3{4؃) 8B Ϳ:osNȻB{YZP'n:w6$u޹ȅଡ0q4^/qޓ!|'hzxG`$4#ݝpS9<%S/= _w 6:ҧ_ss'/=?70'Sp Wqt:~`|];wS . ,Y[!bjHI PK#j?qIJ% V HQܭsqE#s҆sB/KV8Rbbp sk7M\+ Hȇ(=|DX 0w{]{ {-Y5yaM>>Z*[A7R7x{S.*D@H; ;ʮQ|w'?ݗjLmUDHS(pޙtHYAr$RO^QD2dy< fH<եB%D-yׂ.u).PTf7Z Prx). hR΄cE^t*LX6g͙xE- yw? 
m:|YZ O|xOMfOf2Z7lI+ogft6سO,߸$V&b@S*lW:qZtn篯; ;*ZԲPJ[<{X)Ѧ0gXjڼcXaI<3Bf93$iy MBIT<]Ghć;JkuqS{)8y.N5'q˜,$%a޺8[k/·x*&?$Pg;)BZ\ 'p )T;IN$KLJ!*&kjJsVdN<{d5s%-yW.@=VBTEmMC*MHa DM4P9 B qmtzT#skAaGZ^ K۞T8S,RMTmQ+y6oϣN)Ǫ#DP)ZF^XgRF#$Q8ثKB`'yn Z3NBlu3ap䳥W Qmi m~\*9U؃ +H{' i;i $pۢɢi(g7x.سGLSAdTI$Zͯ>TV?y?nf˶.䝟uGO]ŞJ( 0G%o/v2p坙[A.QB12av6J (%2vCXGu _nNX<)J sA{RK@`w.dH8gnDRMX7_%FyX/"8">6N1b )MF`Ԩ|`DIZbm<ڧY7}g-4h.dmaK%/KJu2POp,y QRb>â,7#?,?NLtVpe6# p&G,CΨ OS1?m {F-r5E1*ySU}}FRupF+wE(׿fRknյe} zE⼃{AP?LK0#lvm&ko|2J%i[u*@ǚ,eRp4Ux|!MMp:_Z:mo:?ܳ>1Zuh(P䬈>/U~:4.?}EO8sol[c0+: \vk WFyKwjI<(1 |ׯKƳ{<ÁI)LPJQv Ŗ9|qbpAi֊] f?SoEPJ*]hscn0I?rvE'zNQ'#<.z!ӟ} ء_z!?g+ҏ+/LFPa)S҇Tyi0X Crn#(%2C9<@t K$~͏d0ٿIЩd VUsD].[ jiVKPШwTkUè0FPg9śag;uz7y%b|xG%.lN1Y'ɃIi¨є&20mSWi$nEXzk#wV]6|Z[J.'P'T4f$ b9vNcM!1 ˔hd%hG6#ȓͨUJQr+,\`̱F=JShksK/BQ@Znyq:))J[ōW#*z0\ƦF`r Emf j %awL5:8H@-)HC g0]֍eS+T B lݍIzJZYv/JhZڋxElLF-Ìڰ.0tw! e0eZBW$uޘ zi0(!} n]uWhY*؎Z=fReu^pN1WZ9T@`ӖgsF$?$`pϜ!(hُg?Zh}gQȥHmAZSA8"biuJIm3{6}Q#-P*R*! f,wX0eaHz"N{)p=%an4",Cւlk\z^@m**PV F Y=M ի$7qQRبIiC) sb$K[/)i8XQW @nP}SonelFEJ#F4t57?ntqT#E(|zٹ-~j1deX;a| &$ΨDɬF=c)1EhU+v nF ֯KVqq1'S(@@&iI +9X8U\Ս1enJhYq>>HwTM%8nDQ"mmU%حS[ŚVFU޵?#EUCu\&;uK*ɔA[Y҈3 %zYk֑(@4@r@snA kz湝\n.d;hGtѕ#*w"C[QLıQkAQ7IG@IL{f2KIDQ̽wTiǡj7fM=^y9Nb H轡U_T1*qĩDp=Gkbp[g+Gֻ6>U]c:`[-&--rWնl'ݬ'2`؀YJ&Z0IHc@y}yS%i%i@x=@8"7O%QJR, g!gzkoJ3U TEKD".ܬnÏɨ`U}T%ϑ6g`hϠ :̳581-ߝGH}k,1 G 6 82Nk5n7vh~x'{ $;0nlJ~ij4ODhDޏ8{UhZ'Hl攜uX Zg@LHq<ϸ_oz|~{b]* +laWISYKEJ!&3B-#Bo5 µH̵5cj QAc`$BݩTzϺ{NM8pl>7߼޿XeݼpFZmr?z0:j9@:ir=xL3z6dZR QeLD[ B1*'2&4"F.ؓLQ1%q8yZM¨D-pH 9 ]䲿DAH0Rܒ2۾qC`h$*cf Ҫԙ7۪˷laq-<` 1A$6H($T =[7ng0}܄`J1''JJD(d1#!!"' \CJtPwy#.\; vffY)?#kf0otwHU'̘8C㐭a=1].I1ͧ7")Grw(j^Ȉ;}X ""1.z b$5eZ F4wq`iN%9LL5Wr =8\ tY/z+^I$37U*Αƛ r{JuW$ U$`t WJp¢`G},luB1R&0Pچ  Jt$8:4Et Ӫ(R--:CǕcr WE"[h1BS8j.J!l(ŔA@J)oZP%۸{ktuZk)A4Z;\oݞA~? EQ5x)TYf{NmwnjS⭋hKnbMNv=rn_z髅w;w׃/[侸vJA v|2w?uF8acCl;x\PE1nL xQq5<2ďveHZrCeB {XiBNͿ,''+5-صyɖ8@Ürt#_?] 
SasTؾ_VטsPN!>CPYU 7t@jPtS rO.Q0@Va;KFάi#Я@7i_d&76z30ƙUaa7"{ i̬;,Aտ1\VHt+>YhIi:B;[)}[0%3_rrUX5`AE (g[0x (lVHޡ9.erzh| Y?Z غ&77m8Śs5U7TKV"Ԝj3խKOZ:z\>զY\^W{բͬOM't^Q)\4h`'F~bh0ƼE5oK*X8eP[hiE e$K>Bgz|(xqj-{kvY|:GZ4%'"~vrm""X5p)x0yͻ>#:HLSiFiyGP->Z~bh;A۸u6jp ¢p×|wjU QM.=t=n'+7Ja]+`\u%e_?ϣkGu$y\h ,dA\r^*JG7]2fW F xn7UeaSQFFM*A(Bk:)<Ϫ8Qs u󱜽+AguQe\SB%/7/T5^o|SxT1aƠzפd6_g#{w7Q<]~yV鋋hN&/:‘.`o>g9sә]+{-H4Hl: iFIߘ;9=}"1zLu1ɋ_WkD"1^c9BP`TQȨ{FTqӢ/ѷ G`b_Vs5%96|+yݾU[N f݋^ *U &[K''wq*Xv%r?ʏdrus{9TbALZb8՛E\YRݭCueIARZʒ*=ʒjJg1B~ JF/҃B!Q*KNxþ[48>LmWk:zsɮ iVEA/~Y,ʅ:k+:ُ:aG<ϝ,"&( !`qܬ$7$Qԇh=ۼ Dpaѧ#..Vf^>`H Gл*7\B5䐊ܼTX+/T5øv[~CJr F#LkI]SH̕j7Ðˡ.a f|lNZ3e}3vօᤰNI8&WZFE^E†WIXc1T0JbHMC1(aK%fIn N04Lp cT᪅܎)xZc/8Z HaAd$t' 18pQbϻGv}ݚuL^xn}V0rv^0ny>)Gt)\ z619{{H@e sZL]>:{a89JɳM{|[xg*Zb!=>9JIIQ火A5cpoG; Cc{yZ74 >U1^8vVЯ&.):kfNm:nCs3ġ*a{]G9N$EV\,2;Y}:9 w7rr7{qAK&^E+UƆ4~'L:wd#h](+2'0'A4?¦˃z,}ȊX^\,Ѽw1h.B[1:D`9"uʾpa2m@s0fr[aN={ &@ri؋n0p;AqpKwܫ]ql=4ZSoNb޿g]Z\NU\*;`: MΧl7~gM{N/i5BC|qgy堰>scOn'ᷯg.ߥ@pO L75uxvcFA$2gu]*ѼRe;*o@/߼x7܃)q.7o'i.ehGY Kc(7;^Oˢ^&G޺} $g2ho?QJu%n_N&/iڟ (aߏߌӷLUiӠgb2F8D8`@Q+Ddġ-2$UcJ*1tX9UӤM*Nw6BDG{YdžzE`Bg5猬 sb1 .}ސw vyô#| 3t"1'f#yǺ`9m3rp1D5ko0TiEMA W%o:~ue.̱{p .m&|ejFoTy0x'S1R}vsIk1Fq L-ߌY{& CaPE $A $R8V޼\ߦo޺lu(vw"'@\P`8ǹ@y;5ް x{^WD!Ěݖ?G!Dt 3wV)H \W qBDW6ђIG3bքgZ۸򗫽x?T[w+uTlJ^%a!%# At_+2V)s'a9.9`S ӊ"o蕾^u(ɑ޶%W?R%Bڽ=0jlM2,trƈ`gxF\) Kc(&hEbΕ08sAFQsB]@LZZǪKe^E^7/Yy T^yL1/p ؞6sǖ=ׄk+&=4J>#@^B^@E)*ybmƳ5r@7G_2*SpK}ɣ/K^؋/y%d;_{룗U&V0#%T^/Pqs%3}ɩ,|{) gR+ї͗W}ɣ/y%[WZY&DUXiE}4i8E̗\58現9SiԊΰvY7Z"*С]'3! J⃾$2g0B46c:KLBgI,1 53Z}8bfs iǣ"g-"+)-w^eϔ|wSW IGՍMH x30RiPVoH7ӄ=,b:Xx Ժ INY`d;XB. 
(<"7I ]S<7-*"J -`|\qbjTEd,c.:g;gLT1(hHT-N/tׄ.ELm̧]k3Qh3R'BFhˆRʎ.2\cޡ"]h8R-QpfrɆJ,F8,}.8y]Up"KFZ @"G,`y0Qˇwq) 9Ep :4L&eIUJZRrfrYᨐd[F%^  5 2!dHQ -c"0 qIlxPX`#.@.S:mDEcRQ5ǴhLSk FI<v0N܃ 7 A1x@C%J]SV3:etLʫd-;VoZ梨;aJex#wqT aSiip-[Q4 k@e7sso53mcC8G*Gf1j(G9cZr{/r] NW X)=5UJFfx_\Y^psiqͪW6[UEU**$ @b(ު8n*`QFATMA3l;"8Vn+UWwUǵ+C3DC=_&,muSݨ6Ou^ԍ4>0&!<_en}X1ͻ) \aQ F?\?RxwKLeR84aV돟3|Iܖ/"i#SkWs'\x9hoy!CP|2Y0o.xff!Bږ|p]{̟U2Z`?q#k${3d3׼9}{sOF_=_ԟo7 ?hY-L[V5\wk=؋ )6ͷPG=)iWg ?Zn&h7Ia]D[Z܍JFeF bD`=\ENJp*sU-YEMЌTw/n`΋6woNiZ 7RtFʫ_|kseim-zRto o-xT|YsMajL#tJ6Z-n9ތnhZH$iNc^!!_pT18C-N;:ez|GK!OGW\=ɞsܴHCUXfg¨X-DZӽw;;G@!r Ќ`0X-|SbdrzP/&rt!H7kԮrzt9h7?$5AA! ݨ 0-VJf,҃eb~m ЈL9YD m2-XQ7ٲšQ'":"xO#R9oK% ן .DI¡|W{Q=Yi*hMKvr")[|Y٭Ŭb tusMfJQ< toxS<=̧E>."5KڣWj- ~]tm(tXe=ͪuܑ|&M1Cuٮr[%>1Î T rRAΚ ͂4"jD8biUx~98+p\bF90KwQe.xY/ټ%Wl }*Nǃ_: Y9|ՖwrkN r!f9; }հYnHd?Hh5>z<\WTXfple10/cƟx8zvڒ6r[nJ/QZVrmF1qVfsխn -9 N%荁vU2h+8Ld8WWI%*uBR)-5G/eifsu7ńwD䍍?씊y7I.o;¸Q>e┰S"nݸ[S n JaKΤĂr>O ݹT%$%[ D>ʋWGrڦ!b- KiE/Ak}\B(ؔ吠B C!F9e9+iEh qZ^+ u-\k~ѻ L3(oޔ3ى TusN1HJZ 1lz,0Kv.mJQ:#J(Ld:ݜC;5J)=).[T lן@)ZMLO+*#t,Q$fQmW0sX 9S02$9&ô͸n[3LA幜vV "^;Z}8bXF/0 @SP #-"+)ܨ>*.Jz-_VC 1 fRzibrAٖ)sP.<<\dYwDMɾD A0V*j;y9h!5_C·\c™.owsֳ l^F>SnO9rw\0Y[ǡf:j|:\ѻg5;^KN.N[ ;Пԟ>A0wZ.mN$N.1c@!kt +(uH4G0Ii0TnN솫s5'XQ-(<.HVMmZ:ۇSm%_}&k}:3U#IbCwMFN&*5u>Y"Z侭kvԧ Hk I`HNm) ~>ȌSep3 вR-ƭ+7߸V)RGcz7uK Jj$UtMO o7A}D :X`*!f+ks.i<(`_6ja_#D4"~ v#, E!0)iğ zc }6뻇&_ǝ;>4om ' ޼:{UcS۔yuA/R\")k?-R+gNHr,J:)Jzt8,tq18ӍI+k3`o<9_/;IE1i,St RH<5[Hcgf*D6VJڨ0r85ݫ]nyר[sv݁+K*J:[)p+W "mPu*{AH}#>'|w^ .]o(3a"2wE6s(&hEbΕsAFQsB]@zV ĩP&}?ƋM bJ0}<1W9oq`qMLRi|t ""Rѩ4qR}ztE)~7Z%Qo+JR!E{ "µ0l挣A#wmm~~:bqcN68 2JԒl?39Ś\H9zpKWUwWG1oR/Mpc'Np 䧶[80A̺70Ėj'۬CXL?_k!6 ~c 1&(m:A4Mi8Miy((ZDA:J3QJ^1b8j j"H<+h5Ѧ/ZF\6-BU\Jc~iw+Җ *7+#(k;bltF[M-G2ZLH@WV&j1< o#c H)Њ#,^̕6t:-F~u!M$ XFΫbr `$iQT#᠊ƀQAW:')YGB)uh2*.̃qfr@+9 `|S6OV;X%;E/[C(0ѕ?[*ӵ5 NhHQH?mhtGJ p{8ޤ Ut:+=vݽJn_;UZpt9Ė^y>Ì1_x'{I* ff8ۚmpwTf p*ݼ_k}37uE=^+w-R3:L<пc#/띋_#1QTƝvzuک#՝SZ^\@b~ 8SLx嗀^c)""䃄'0!z*B%(ͭɆҚ7fBx-xrޒHG-,A3jaBKm(b{*i(=@"HˢW[$Q`rl$*:Gtx `m4"N;$ 
\88*cdQmp;T߼n*oޚ䝐 #s~hDBJvb%Q5>" &k</?'zvAJ*\1OB>96[=NFRv<{cū5<5bt Ggu` -Yyytid,>=xn.\Hg|8 ^MDf2  fdz0L'*/-eg|RvR[u IڇO'F8M3ArJ`t%#\>a;P vqz9̡?يs?x'dolr[v:Jx:+39X\F}Czz~y_kᯣף~Iዋk njP8 ۿބ^ !uo|~98L߽ l^/_K~09Pwη9"_] oNNa~Oʔ,pfߎG,O\ ߾t j<\s-wW`)q,gԴ$:՝A%4+/49^.:xTEr. = K/,7,>烋M?m}>.&z4gÏX358'-]\;ʟ>?zO~{{xp F,/W^]qL .M~OiL;x3MOϹqP#<03 W83,i2s9kOQ}D%3l]^>k_N#3~3?/0 ?)D8Zc;~4á5G/GgWIq\|sMͷ,1,b_)%ۊtV6`ɚOӵ)Jyk 3%ʌ<.(Mj\OzA `J| Ky)i# fgvpyÃK44'9Gl{$w=Yz].ϻJ+$`̿2 K秛a5 `V7xnZS\Dj&GZ+2Fl;rU 1_TUrR(wUibIf.3ݻwۅ{=(A$gmk)*u-{{4%nXG)p0 JEДuWhD >LLxʝÓ3bɰ o,4cZ!1 _dpvqokRZSj-.4 +Z0|W|oJVzԜIXÒRԼksv? KDyMƬ> ,U-ik_A#lǕ}BG2o`]Ûkrq%ܘr)l]q&PGvYIxLTͬin*œmxɴ*(LvɴZλ;sYCM%;% [U~&n6Gy wm=RpmVAע& t%[^\K`~ptd*<׭>~eX{FVu}6'TP2fΔ& Z -qVcU^R^;kb?]937[yߪf֟n)gn"ꏇ[8dxXFlJC'ccݔgS%gS*ݼpV9eC)5X#dXXܾMu욽w'L6qO;g;`=Zۊ*UmtƧJ7Ja\IUGVT8knm[ȭ;7ΰ;o [fW"q,uݽsq0*X1I{RZ*؟SLA{T?TV޹~mQqV*X͚b"wĪlsusNYG {#w3eV͂@1}m"[/]붧N;6(uu o^}~:D֍G':G7wI@wk|սR"\rކ/sAY`&K֫%=0s[fcp pjSZc ?̥nuhS{=9[[8VbtTkðQs"ZdR'R`dF(R54."`jT=*cu䔈VJ%^f1LgH5aTb@#=%8h DX8=!EƄ5Cr $!|Tb-騋y Ԯ.ӛR?Ϯ.tvVFlfN-k5\!)8S*LH3 G[bdw`2ր p9g9c8 !dy5;KR 41M4DS,w8KB XŴ('Dpi L!e^-8uZB-Zs5c,-yYY&`D$3 L3i8E˪* op(58,i= cNԩVT Zr*ݫMl"FjE"\2dLzie8O@Vug7ǤyS)HӀT 2Sĝ&1!'@΢ ʲĈ գ>t?.zC4HchqqG-!xbI:Ā1{-1#l X JFH$i#a##_HX :!^[&f_ ͉~T=) {++&.~wV_Ҵ4*(-~U}heSɧC.zN^Hȗ>NV1<7O淦;~|gκ_RI E"B mdrJDSoEVS #vj >W=bx\;];H ^2 &o? 熪u솄͘`{_cE9~B:O(ݴ?G3nWRE큡Afw=XPq{2\#W~?Jz%L﷪Fɔڹ dd< GZlBV>x B}*7[*OA]_<3$eٗc,G}RqJ2o\d!p2~g\;v߹ӂ"s140Kb IݚK:FFY(*w !~ơWGb ?w|yީT{KlSt}Wf*Ca{Z}OMan #%?mv}e`As}pEvz%Tv'SI!^ϋc4{c]=~wkf8s-~?JE1rQlqܟk y1.ދwz΋` MEMț'G~Cyȭԃ;&5Z0GE^ͬl:HH/Tnl{pRf\=+E 9Y4d<Ϭ=zlxOޑ0{ erƩB "X,0Y@*MKc&Lh;k92[қ MGv,vH!dqG c)z_=@ҖJo>)(+WCwThTCBnOYdsE(yd\2@܈ N)XSUtw<0RF2+Tr~5?0¾@؊>|A^.ؾHc \T2R3,Ý{mUD÷w$`D$@"䐐t~ u%֫(b@XpЕ2쁢:hzs*Zj@T=:xT&O^;SAwJ) ڠLwp@5{L{>$(A 씪kKX~"P֠ JIn?roNmTZS{ TӺx3gw%M]:N{~#c9L6-bMcT2?Ȉjrxi[2<=Mo1͕qϚb džvгfQ157%!kW._u_w=g9-p[&PF.~xFNM($/M$8X& q*ͽVI;1*@ļ,V)? 
GP3;ar)4ឩBπcu5^ئŹOifQ7K f/O0kxgf6O'r.&(]aOcJ5|eֈC/a{P1V+&w;23˸ wqh, w7ʤ@ w(!&hWU 'ѿweSʇj"uT'L hnG薙J遀z7wɁ!z?Efp#ɄR 7,aOUpP)9e2Rie>͍,OI,ex*O<[1z󙆁Lձ0jγ6\2g] 1R \IV+l:w' H2#zkmyS:;|ܮ/NVKcx I}VOEӒJRx< TW[h4{Wf2pQVSk;VF*:<=9-)j*-҈jI;Ҙ1&iQX|x>KJsNq|=+%k>(ֻΑzD%]g$#gqKEK=0Eu_chQhef0TZ;+(aP@>ZLj :!NÆɠ>u}{Ud;&'kdbf&&s՜mO;> `R(;]H{wDtHQP!- '@9Na=tjn81#4iEV0iGo/rIg͹f7ΝJв1ioYVRJ5is+rvۻ$rY-{b'3Nii.R |̐X zFIQ\4HhiӞ{LFt+P@t ϴ1  (RiA0hl!` "[k JՆ2%E)3ldžPXʹ3Lַ&<ۘ,1} I{UcGb,׊rl2 ucƚ1i}찉fέײB'$@ګf.G=TK΀,zTT9$F}~zEK.: 9Y4vg(޵7q#2lv5^j/Wl\#L Iɹݯ1$E%p),Y 'h|cFI>ށ F飤\VaӇMSi+u,WfˏU 08-ZS>%给u9>\?q:1fkPYiГm~="8n\[A,](Sd9JY9x%2m=uj/ ڄY m-bmX+J؉?l)v;@R\vRh{𼌹/Sx1&' PLZ'ln5q=e DhG`CMFIQx^򵮴+_ĎY|(U"b^=L*Dj.X>#Z%ظ坌h:X4$?Xr"߹/Y0 ^iBC{WfǮCt^1-hO`kq\2Xg!g)ElƮltؽ!Vm€A`$j2 \3id3S R:% 䫠uA p;^xN;oߧ~F`:}%smeN)(w[:G;iϚBψ~#;ЀF~MFn~9D[ Nf{\  ' hЃ<˩3X k اFH<(zcGo].B80\srz eyTF9sa%gVe eNq_RN+R:E`jc\7z"ioS ~ɤQ ]PZEW7 fHat@8-j!% V-E1<30Tu#Z@EͶlJQq(/ zv^`Aϛ9M|QUtkqN dT:܄# B$GqLN C J^MF0sM/+shR~)5IDXm91ٝWΤwAo:_Ơ%TV'( :WW')x |4͂ɮPu2yRw?IGn!}}.[,co- ψ)ȽǴfwiz}ijoF\Բor>.]­eb`gF ",P&]}$(x̾1@Y4'lGBk1vBr?N*zH3NCA|ūoa) |1c(8qcuLmRum{ӗC*5`aXz&&MCBEZrѼJ_la7r⿇d"}o7zF*Lc M۶2ݦku`MwkӴ&`MwGlӳÀP(ιM*P066|)It ٝEOddƓP&bvqtnNKohC4Ь.T`i*TgGjM>Ce|ΏB&+Vw/ο@uȗ,Q[_mMCG%$vI-l֬|2<հJ@&P[z"HҌ{C$16@ϬϕԱX<h>0!Rdʐ29X(DFHI !l8 AZa*tiP^jHnԬkQDoqFRWsbfwίzqkl\y ;%sFV73UR|}qNN)ռ1^WFqX=?ɷ~(؏PUxSqMI&[hw B4|3{22fA`?]u3djG U0E;X-x;}pdpJ5$PK5%U6H!i2~!q.8dk7$E_5~#)I0`&F㘚+lqȭ W 㴵tgt St[ h %%? 
*J[ʟ2ho_,ϖh_&L:*!R]od2wCA!ȨWޛWÏ xԏgy;'nݒvlrI6u7X~dܿLgh*Q}  Svj'WϢZ Yu;?;Ru&`Mb+gkx觚P6#NGd$R[mEXxeOɚFGh')m hF,by68ɥJa#SKǘ&Gl2⍽jI/>ܜhG0i*χp: /}1YۭTÉv)6d\v&Z @* ݦ}ʢ,oB>F.a4 pNIn3-NVFq/8sHg <8@Y cհUׅ6J59C^`[YA-/>/iFg`YIXctY=&p }ry%2t2t%1Sv4fpy/7t$v|}Js1xJ@i ]++6SM%A;azxd=xx xN|O gAHGgX?9QSqפ wBv|LHm(RVfogMI_d8+Vb`RX ѲT/;ˇ.9ג٠ЍEmKeDv+M_KЙeJٞn>#9$R x4<ū5(Q bMPkxzbَ{44^RĜIʈJfPadsg0`f:olr.1PϠA罁 >9P7y33q Rޏ/FbZ.؛Kz.TEu6"Ȃ\|~.3 ǿ%&#dz&5 qmN~㧦iu0p NXcd\3Qi?Q/1Όߋ?|?x7fw;X KCwS3/ ]vRwy(wA*`Y:mj5NEE8KDJNsˤ"YC ˕f'$g ʹN2 M+?I;cT^F`+_7E703 =njF]HI4cAT$6w/Lgkq?׼ _]|]w=3|Ə{~Qkp]o?8<|uD@g9oC05}^=O~3__/sbŠIr$~\?aIRhj MlH21}&yN!VfG AfPKvY艳`J؞,n˘Agm?'b0=ͫ=+X}pRɟ>ΛnSua<,&3z^Pt;~0^]|N@Qڎa}cFǀp8'8g{ 5lOAfAFEX(OOEŒ?1á+]hG_z|f7-^i[GX"B`7 K>K2^jB #N%<ꭥ'J '}O ,ޓQR-9KRp 3<1ps̲t:((3ױ%uvĜv%k ̐E֙v|}߅.Nr@r۱prĆ$p%0b="4VhRE}F]jWD>;"ޠszdƺhKCo?*pifjXL &x_+O-r^%QH$"3LH)Y_uqL"n92H? җt9֢+,J- ^vks˒g``[XEiܜ?P73HM". $CdƧ*VHrb“Op{D<袾'`Ya8;E'UjOYN~s/E\*N<,jG~oٳ$ p`/oٴ$*nuM6% 2Ϝ2),Cfz41#(,8=iFEOLخqv1a60Ȫb[i]tWJ$^E3=6]ݕjf VJŅczqiݯ%\:x<8٠5]lAM7aN ɍ_š M2:.{@OuZ҅CZeC밂 ,l輳 w`Rh z,b=z"yU$55ƻ[M뙤Rd٤7}`Vo_ H>{nwR %"?/5)-RP|(e J(FlS&)T{?p=+g&{BCRjZl9n09xC4\"S󔚁=q3%NÒ'Db%%2Gè<0!c&-E#Dj!cS! :1h<s)}ְx4;̹ >s [98 I$vM,T+*|MN&K e;0LlcZp akH%0$6e~ fa$=Nb!.i=/Uf&럍I(!r@J:t7=5%pht> _X+Z,n\(~Z.|k3ʻ ۫͠ Xp>Mȧ&ޛ&-ŁI%nѱ8i*Bx RM3@ÀhȟUFl MSj3SNxSdVS7&̢dgA[Y ̌߼iu픵R֟ip+㰯J?U?O|Id`\E *Gߺi.k(/C WYWMTݛS}nzCfxM@Z,ï2%F}R~9e62q|cx7N [KW!`x~dXqlIc{&PGer8~9n?ԺZNc^o5s ll~ j};Cx;qP}M'w>}[[.T ,C*w\cQ ɍy? 
aeno~:+2=GoaCu8:/-@9d eB Ya AI>˞*_LU-,=>q,.wƖq[tQ񵻎ՋoFp# FxˇހnhM$auV>x}^B7)ҡ xKnh@׿@-o~ SlL9-ĤI{Z\vi'^B+8ڏ|oD%cqs+HO鸫CgT=qw._vㅡi%SІTM_S# |Z~| bLL4Mg'޺'X)k&ϥlaX:_@oZ[ᯬ.HGY'~?OpOuKWC5Z8w/JW J/rVdP:Rb bcPj0{I J#CVNk<%l/gt'%>8VoXR5zsCR#L3_6҂1E?f1Mg+J{> t|x0l1wc2,¯+¯$ T3BxD\bΑ&H"S,Sj.LͬGLfDi' Ϧ̟xmPPKR73iL>rWn_7Zj|OeHUzl $ wVl`ʓv181< 3,ZgtGqm"4IbH(EFŝ&,E0ŗj*KPbW^mzN #M|"id$#Ot ꭯X:+,~Pb {ܰ 5u@kF6GnF*=W{-9@N IȁT1*7@U_PDxbc&tDj${w EqsA/Qn 6 WqDez4_ _ ˗H( {DFe`*%$6"3n挪ْ+8')[p" #lC%[n9Њλ[dj1W8Ej{Ot5qq 4mGQ^\* %e5me?H`LCȐu9/` B/[G)ZOKԂ5*%J\ǵފR`*PR=m9VZ D l\tj+go"gm \ԝDf2-rf9hhh9G]e z{i7s|SgLzFsͼ ev} ةq$PIz(:˲Ϊ JۅS!tU^Qb 0ߌ]hv"Aw }R9.ҁ 𴠰֒pUQIYjy.VTaN7UILEgi h9}r(`f@ϱ[}>i-ؒ .ĉ_W%Ņ/}H}/"Ç'f2ˤhps)b+B;)2d>@p$缻ي)B~Kn%Yк0#H Q`' 3OD AːRP:"C%.u`vrA 2N܅ly-k/Q,y,(ݕ*G&Q%yr7R,⪣ox?n5BA>Syu" =LXzԀ5]vڷ[H'A|֪_t+apv-H?~4@Rpz;?5tkeVDѾxܬj/C'E@| ~%{<(#&VWV^">E@a 1bOVZP\"YW+j ;diݜKd> u ZbIߝ`K?vKiMԑ[~sCқD%CW -i;:cN|e^6M9]x.:Dc}7!!ܸ`H)%@ 1lY|nM.LW/a\!}VS݂XG';1: `!;¯oߏ&Q8jF5j6g!mټZ{ &my*40)S$QJbϢ)'^T&P=)/RF xD>0uUBRoΛ)UZɊ <=4cɫH qc{e&ďK6a3OŁ _}%괓2cw5*cA4-;d'$[d*7UvÕbU2 :衵1kO:pUBw`ƅl)7oooZZN^K+vGK.t6ZD~`j|iOa˄SX謣|nj~yЪ#}O~}REߞ`RS;vMCg3'Vot6IUmEsP`^P߳+W>bkh*xƜ)Ie;xzF]JRY܅3CdvfpmG3<hB-Iks16O%Rn}6rxq&C_u'ȓ'p4fVŵ |(h<]r)1rSjT|<Šxs+ZS=G;Z#[VR Qx,G5jj@alLFy'ʗP'^-.D0]xuaX9b\w"޾gqL?dV|fˎpd}-e!6f7Ƃv cJ? 
uVۆБ >aCAMP\I\P6AHPgb8JF 8&_Sx]90{_a]~buC`l_zݑPj< Ǔ9l`u cx]_cMLdq {4WGH+}Dr*KQTB" ,=ӠLK;щCApc9Ylyȥ\C"]BhSnfçdVrTjo3jT#zdNᖹR'=/a"P^$8HJv!.fZ5Œ{oK0"5n%z{ t*oE1PjO~P{ X ;-#ȐӨ*bݵ6.u`umʽZk&=ͫ[EEW ά/(]u aGòN wu)SJkTPj8t^ٯ7zT\GMf#2fӗ?\R1'mi~*>-8:ξ'/gaᴆ+)un%(oa؄ ֻi>e{W6/u߽; |rmN.KHnA ,i/ބɆVGZ`MxH@ , 0L$p >1`\,:>',G{2>X3"ED=lN"'RBv=`#%e5|aƇk`v6>c`YKy{aΞKV;v+<=o=|,TEFY~2tEI rzη*K:=f}Ͳzc)ܕe [ovh:1|Ƨuaw9q7*B'ݏ17MzZA>6ٌԛe(?܉Sߦ|ܕ2ǻ/Pͱh;JvIۻSªpgygTb׋y!yðuf\{Wq&u9\n3PƗ=4N<^L6Y'J#-b=(J-;V1/x{ܓ\tn]B.ىmGv*/-LuK]a N$bk–]q>*Wێ-8&[.G}fp!w1哅:7} /NYYv=aQv{ƵK(]vIꚟtK$%1Ĉo]|!cDglk"څ~y 5Xyu^mI 4镂zhVddW(R1LG #øGѠRNMW\!<4oNV!+!("T ÕDؐD*cxb"L"""+RLVEPx[#h$#ic#* f$JTČf6rjA $$H\KlTJQ5I\4K)qgjM3kg8u (לYst,x'B"}𭴞8Œ\PSWTM5뇛=giM% IzTldV +?6Q{2o-ʉ+$s]aT3[+V[`+Osk6"Tވ yTͽ 7rzZFވ kZϒBx&ylHLKݗ AA?l8,c [ߧdؐ!KVPQz۟U}jgag8z'"b/V;c'>#޼*{~3?y  nǒt'aeU3ڹgwlK'Q.'\;ֵ9xKdzEH*8{7W@J&m^Q`g"{݋n"+?OIZc.ѡIy.O^QRdѪqd.N[p“c"&hE+f'B[#L:fiY:\-|lB97uvQhQ/Fk:DMy9hn4r&9/9csRCt4سl]Ns*-~ށZ4FṘae}8g c1{KB8z;^Mon3~7Q X3"c? f{ߝ酦 w];O/q3_ .v'3[za6^OpȆnTS :0,?ҥH3]QikD}d8Zt{[H6'p$0(\ 9L LfXMhMWHp=`N@ t%dS%zZZ`¼}pr=`A{t2;oHxtl>hUN*: tLW Qyo5Q +-[ Z1b›x %|#BHWky{q&+   (R**"õEs&D,FX"Q)CĆG9@R䊂fE;^*t#, "+FAEs"" Vmȕ4ZP☲&*FMHk.a~5&:HfBw>NVGERͨ38vIθw "? 
Wq8C~JY GÜђn:@U$\O5ZD91P>cG5_$-1v ӈqJ*JBypq.K~ly97-YEk9Σ5SWCxSA4!`i :x$`٬8ġ!<_zcbihl._fը?ALeUpg7y^L7/@F=,y|V.8Qoa'R- ~bǸSL n\ǎ5y1rn)zxkz׉; yfanIpTA( l9'QDr<77/Ú#I_6&fP,11ޕBg^<˨S < jlXn4eʬ#s̜ AM inxmn5j>ECNLOok:fD~ .Ξm `awiFл432f `-:Sm.VKTmDgUZ٘x g$  SR[i#3VheD1ŜcLDϜ>[M= 7B6UQ{P" iI&|g5Gm"w4"7"SM_eS`t4*Go?fE~<{Zt7nz5ij5S-R ^IZ<=8ۻJR;Foc,١hqA}-uٕ>X+b(/P?:5!AQmG'a!vѤ;더Z`ƴ2u[.潠ׂp[⪱ |21MF/-Ͻ1HQ0{!^C”DJz-@;mVۂ@|dȲ?pi\c BhK' 99`ew`RˆRA.2Ԃ͡Ъ76{p&/bs6_7/[ŗ.$O.E2X\?V_u㍌nNimGd\m֭x֭ hL)"|d8[)9SFv((f݊kݺ?/S(Ďljcuswlv`+VFo@3HA%,eu}p{y>:+2;+j1u~<]Z7ls_`7x8ީxgmAn^:x'4M׉=*ǕyoX t*JQ]/æZ_m2}"ۻ+s b0ZlL0M'| R/(`6%W^)~}Al6~w}U{McJ<8%g(iFj(iyw1W/U{x<+ACntf f}Xm8e}K`GbR'h@åfrOfjB`bJVD E(u6|+_mk`u sڞSf Kq9okx2_TM&բ,uw_iyRZ-BҐ8[W6拟)OT̼y[ qŬQ6U;BV$eq`q<)9tZOҝq*3m=yY ϔ+)0˗ێId1{6)C͗?S;eta:TC-}m~ DMaL29\ BwZ"2crq4w:&_Yda.Lia19”qfO9$Ȉ)XTXp HcA%a*vHx6T6+A:G_Q c&ʉĔT.zZG&)F+1Fa0p@L`vX9D!hR-&,: ~Lk Rj+f"Hk#0U;*>t]W&4Uo͏ͻ|fnqIUiy7ΞhHf6gI{ɱlR WZrV9AfWXIy̘c|z; rܓor'3iHϚu[ bbcqa5(3%yNaj:\ZM"iLeD.H3\"ȉ>^ǜP6"Hza9cQ!.6T.6+ݻGcfmUvV9iy~Fj'p0o],.wI0JϥSP['1WfřfhNJiu'L:)Xq(FQS$Yp(D?b8iosڽY0|$Y' s!?$LjA¥l, Pcv< /cmmҜfK gfrnUxcsGm4hxuEi1K$$; 읒}0KyL9k bqړ$ֺ|ewqvZt|F#+0i41Nz8Nlݮ[𑲥QludnӃ[kT:fT#2lyΖe=|o]l[ U1 =+8BNZ=F+ A| ̠GzGbp|Oxhes8Iϐ'fz穓-g[Iw.D1gi+wzdk#!$=@3yDDq^i]g2eLY=.SVfVfd +T;&PZ8=!S.N'ʩPO8>R];-؂ Ց,\n vIzȕ}f5IťdZ[drQ*QE&Q&_^LFsw;|E5p-.T77E ̼HfըPcWgO ⃵Mއa٣I]H4 3*@4g8~ojٍ{4ɨ]7Ŧu7?˪q{6&cK;L@ò*9Bx{9nqb^2qhyƒypK1r3JmQ3[!epq/bd*ކa뮍7d%rCťǺ0EуTv{ Y=%h^!`4JI"(r@ 9vBHAG!ϰ!hdCBYV!yBQ܉QiY,}X'FEh"QeCtOC1A;Lpڵ Z*.E|0F8CU pxB!7&7sj ]zr󮶁iꕶ$j6JE> -*at1ߛw3?],1/pocOo/2֍JX< Ez #Ƴ8A ]d2z~̍Mcʌ|<>\HO\![G'R[ӿ@=Єˉkr>5j,y~m]鑿[hvXVo#[޵u#E𗻽rHHE]hK D]*fPcrxstE&EǙ yT-#v/,v1~BcD:`/fί 3zDrYjB~JGαv~l)NO3і9}=/|{tkцXf9c^q}Lk 8qb-j6 -[γխC4pF~ 2usa{5"*vR.@)DHĥ4HX%eL0TT22H4G(!jc4C[)q(a0$x flxW2xźIƻi|݊zxqk+yn`WuTpC&>7*e~w*CBpmϢ9G;!h J\* ȫdH_B-~zU~۞t-q &?{E!~9a_C/ГW6#Vit<aI5DHQhJK㼷Dqt2յ ff/NQ1RМ>eJ*իtW bƭ>d!P)V}T ո:`V%9dwuX6}gOTf߻@hُdA&`jeP9Xu6qmݪ-n] CS)Y7V+ƺ <3Vhu!OUR[L6nO4Lr_ầ`/ n c"@w'>/LtLXZQO1xtst* zK&(r)Dnqmn5bqaF$ 4hr 
VKҼxJ|/NfʠxW*̪f6O_T//:@h 0{On$ߋw 6ޕs de- u'*Lu^*ePT4U]o]$bw[5u!Z)EnՇx.MG Z9 "|a=2!ឋ 8'-*zTP͡ܧ-FyvIK p ]02IeI H%iTQ!GkT+V%ƥv6oU/dgnK!OUdͺ {qgRE+hug~u8aZFw'*Luk^2vAcU6. ݩjFc!Z),@(mbU-er8N@2mRMujc2waEǂQ iƤ ,e8jd1QVAQkQF{RA^T&kePTԮչV=5rY_z(jdHceHI<8~^\Lߟ;=*iN@%-;,/o{)qb;윉{b+cAJjtIvR(J1 Q@%->иJ/H|ܣFK7f6/~td .0J2ZuM5zPJ옅u4_J%]:{F*#F%* 6]/&[NwbCwx)7{NT;.XHcS+ =ZUʜ=83qap, OCn~s4Ra,:R3_C@ίr0?F?C=?t~o=t5ԧ㿘*fi:0,f7 WZ׳V5FCN?=Isvt/,诲N~iΎ+\^f턟Z|FGDIN/>}|$Md3A Y_Fd!&[[EK$_g!""ewn/-'%Tc♜" ;77h$V Sy"qJ6X%ET]N2 A+m^0ՏĢ#j:&;Lwd0izf2ȯ#)mq9FSpx!b@~q>s&'%# ũډRCBYpNb_:HzM~ET0 $JH4*=([9mUF@6U;:2ꫫI/b(?G(5X d(iHS0^1:}9%1D" 5yҹ,1Mi҉yi:lW}Li0G 7^ )D7+"N6 LT6.Gm7?`0`HN W3f-R.& GK̘[%ǖ9FG \re s\\6u)YVMtm]s/Q=yǮCCAD9YE %'->[DQI iث OXv~&^f˰~ϾN4cA;^-Лt]XtRmNKhN24' I͢Sr"'Y JeYSq\z/Q-ڤ,{ҖK4ˏx{=+v(5DϏћ|߄z4=HFX~l|k﮼Yt6K|u|F̏P\oN^}Ń 8ۋoe T#3`tW8TfP\KCA ~$d S-Z)nU%O"T8i +>m5bU:@t#`)CÝgH)Cm.Ea|Lf qc,nlMcOZ7q܄vdo7_!.;[gȊ9CU^$s_GO=Qpgiޞ|v#4ν\zzд˳(PT ^~N8PuA|kܴR_{I$k:2 <0#x繍epZ%DP(.D ({⫥Br5F702qT^oe藻dP}u&Lיd3izf-;z#,s5L.䨢2qg(N)ssϊkٵUղk'zy1a%M!µB*V .N*TT[TO353]3 m(M|c) #W >)FQ&h$KL^HYQ^GKfnwT$jiHQ Vc L/( 8u!R dF92YIX"HġM 0Z&8&* R;>ñ& '8$ h̦bN|>~[[ȷ9ΨX?rASDP '8!:h'P֡cUzu8 tp[Ѯ/#TR: D_P~A`RTp꣍ƤFޤE['ȔoHmGۉҏVKS 1jn`$ Q_Bh Hj) Lˠ%ZZpբ+4AX4I{0֕B1A 1}{̀iP2Hp{E*7y0NH,q7FH 1RM{禠̘,IX-OqEU[ey9J{!RҠ M)7,172JUHA?:`P Э4]3/ZSmLjlxWQ)1jnZNnyj=YDzN0j;!p(\vhA"TSF[)@]xD"ɮncߘ8E ].hxBཱིs@ ) A6b·,!|CGpv՝\`4nm(_tѳ%lCr}R~u!wBvIC<j;Gg(|h&5v:#Q#uv[0 d t&|׎COO8Ӌw:2?{ƍ /[TF¥qsb=JN6/Q*1D$[ 4HC )-+ۤFF7З=k;?+L"`sJ8%V~I3S)?!֧MorE{ӡLfBh%^&Pf_3uworBe܊i[@7D9%x׎,[{1q~F"q73?YZĝ_k{IOaxY9]̛GrS}:+O޳v Z`vSwۻٻ7Oof۷8[)<_L޼ fvF?ʉw4ӪʌoJ@b{d%|p W߽`äsBy* +PW&H_!0{ƈ1vNb޽%(Jb)7}Hyih*`+ OnfkQDdL9_ 㰸@ȓG o PѲY@g__m^漣krV6έ@֖1rRY> >k9|Nfx:ճ*)4'Vn[X{w>8 Hmc nm@Ìv-VBs;)ωvLrplrVn~wn&+5]D~n'=t8&~)\+֕9nA.f6 ϚAa"&(TŮVnxwn9>Ѫ 1"vxD@̥c"nfROc&{b>F]9Jrp)B#y0S1ew *@Ν‚e1,=w Yj13e5 "'J, SX (+h(YT:'!HQjD3SmRRȩy K!TjyY\bXWJ*W_(Xoќ#,(4dqBPցQR(ײL^CEȎU֨ j70\4oWӮy&2桌L޽E#dz?'qEKcjg;/<?cr~1;Rd1'ަο;tv4Ղ]kJs]>}~k}Vyڜv_,fOxhtg5Q.R[!St9E.)X3)NJ wvM8$-RZX{՝僬i/'F[墏+#` Bc 
Sςd$0RKJʗp7BP?۟-vuwjdu-0ݛ/zr>iMe/^W[4yJS3Jg%[r| xzyG/,΍X֕zDH}Zu4@M3p9=l{]Z0!``S lN4;\2 RB{kGmv*wX0 5#L8 EeU6$-c,xD~1>ϊUĉ3''#F'%i-˴CRT lAd=4yiINI/z;h!s2Z ih$QB!~rmIHC#p\װn4Qʼ'RKw":XPḵL (-Td{5!XSfiF0$h=R QA 'U*TOGrF<Ґ! 2zyyHC! MA)JdAABneC p*x5!FH댵F2!#00d &k.*#Q"X`w+DSrɸUY/Ր9RvxR.ʥȌփNʥeL%^C7dDOc/1tcuk ]<9k -3yY%X-o8l!hCø!bC+$X Ɓ1fKUqJf4ry!(ZP)Euŝ6xlk}Cڵ؞Q6FdKܨ, nKRKrQ>s .kgoů_l0 kCQuJ1.AW[ؕIqD3^jiw&i{kkuE1zJ;e-ijMFDNԞ2"C˹ݩ%$h;UڑӘOC{Z)ۍN^~MsLxAK;q .j!Iσ??Ȉc'0"1;J3o`y"hhM3#Ux/H"_NJ[p`+nRZa0q; GLR_y #A'?1q\.ooRy#հҁMmZ˓Fl1뱎CW ">veS#3,1۰\<4,neù傌 |b1ok+[m_`;-殈9,^v8`${+ Lp& @F0$8x% -GMU$Q'"Q)GG" Q^sްkdfQݳ%ܓ(c@R*Pbg c^ş 5Fk(ՐYowW9),T9ʑcy{Jq~2{k*e^顲A`T(|d]>Gu)-PʩabGb\Zةo  15[ =5-!&Z_ DB{Bkf^zNhPQtGiffsڪFXh:,E 2:6][ DKBv] (g*C":KڳxB+$\ǟ*.E` e/JVɿ|]dއBy"s[({I,>uIHw y1֋$EpŊ/ **@E ӑJ 鼣 -c*̇(E"i1.Ƹkz">DI-QlҌT1X.ښwmW &y4IwLden! ȿMIKdĺn[%&) cu$>r!i鳋r_uvQ7jʔdG1)$v{xI 6GQuDuUIT]EaSL@XK究$ 6%# }t:` tSO|Qa/_^ i{2OaRREOS)wY!, : zLpoz^m9 _W,`;da `W,:+W@RDR^ "G߯ N 6 +b,ab`H.Ԃ"d!853MO]{)apkNRRz4^dԵ2 kgN0rL G%b0\a: 1K0ׯܲ1"$ ,ٯi?prnЯ,ݣ'Ҏy\p2CFlbRr"W7elSr#Xx sJc)Wҝw}ПoW'+PMF3@ybylӻzᦓviÉu!Djrv Aq 'P{JZ &eTId4XQN24"WJҮy7ċ'%'ָgl2t-𳨬g ʨ l\hZU8ӸjrMCicCO [ R1 +XBy JКJ!W!nw"Rmad%3ܵRPP,e+9L2:o ;"Z#32ii ܩ &ހ`77xk|4V -Y$@CJHKV+C=ͺhZD,jfjHY_cY4=d >ol3뗦r]B._Z%4qآgD-]I)ZK 7#oT3}޽(/$>WpIUE\̓Pob E2"06H5I(_Ik.S)DfIYmڤh;TN5@;i,z8jք .Jf"h͞@$ g~U="[sf4`uLA|a. 
ډUK%?d[!\ݑ ݲ ,UزNc*LQ+[cOQrᎡnl8̌wTBlu#W v֔;Hk !{ L2K&1z,}ֶbBc'~Og魙~uɔqDFl`W ~<5%PJZ8 16n&kmJOÙo8f~(V33hIy.MhMu8Mo9jx'M뽜L],M h &O=2;_qY)hR<hʨ6,YaX4K5yGm[h 'C{_\zg˱VQsICS7RtQ11^p捛Oh8QSG͈j0 3Wcc慣{rK=fGxQӨXQBde FYZGI4ݵX*|lHԂ悦DעD#T׋ҏp{Wzo;[qIg+Ki<}K-RSLb72nxe8eZ@I>P Bx"r\|Nx˔o'!V6siKcx+k;+ͧ௦tn N!&\5E^*EulFUv v: |hۢΊ'= 1~IJwLBJ+dN+Pvf򖽶١jRƗen|%^Wai{IXkw_x(wtrH{EY[|ou3T߬!ݔmIpwmnB}aTj#W)ergcT Y $] pN-Ks&o)\C 89GUt]9!, z9 B050;Dۮu l=!|}Fɶ` `_{' |XI uskb|NX)${u|(X9**[_z+3y;4<WP0 ŋqmoAɃ{ޏz^pp o{b~x2<\G5էn1ʙ]cyn+mGv+l ֖\ԭWb\2WW9m)sJe.<ښ.ԡy0B;ٔHUx=)S@;P Sx~ڡQgۧnx=:)5C]u)_'v||{;)\9AKDUL95sxU94jI u8s!dPQ{Nq}neޥR]λSCΉb=GzH;m@jvۤKIooέ&domƉ[}})tRWp2ṰSEX{T[I{zqJ'ck2dÜV6=8*J0v:L/7PvmYe.֞k^$Uűb^G-Qēe!AqwfvoTQ[}TzBRJQ|X >}/k)n#+~C5^T HQI;ؖq䫪V˛Wo S\yBh:>,.Iob񨗚 %B5oǡ @Q"VЩN× bp4z-M`C ] S' jhW FuP?*ѿv0HXr赃QʉJkGӬ=@ pBSj殏QV?d;Ł?XI3 EUKn=ѵ+n)_#!\I8QUT= ahF Z#<0?̆A>My{$3 _xqlF4OWQrgF&1? 9d&&{ h^F_~oPoNez)Tz1~؇ 4ކ Y4~fѯ,uq}Y(@H#RHbk5$DB^yeUm ޿Rkޏ&޻Ñnz`33I/Eozgf01ĠK QlC(dguf2@gwip4|o&ӄ|mM  %2|lw?_Ą $Rx4AL`Db߾$)y9]pGRwڅsPa{6tsD4/zw?|`jqULxF#Ny,fK0H4H磳))f{px+~(K=42r?O ,\CKb_rH}O^Ô|kf~ݟ9aA@ZG O^E-^1*Fθؑ.E.%he0cW._w/j `kIŇt^w0T~1MNgMO4d\Ө$Z &YrraHa21w1¸4W"7 R K0(N k=َch[u#e6[ =IX$E*+0~?^ . ɳbK9J& vnSsD-nK 7TbZ|*2*Ly 98΃SXYM,l C!H i\&ʂʶ j!K+(R* %u@N`VHhd̈ (ܒ09Q2 "Na P-Xdĕqn5EN+ Jx,1 (IԖLi@afcD5¦=&du :l^6/sr$IRb}T`)Nü(- A x[XDd^#bQ1؁3HQX)R`2F[?7D;9R8X**!ZC*Ru2` =QA+f5ģhB(xK(Bn&< rf%Ɵ9)l}G_O}UFyFDž. ]`fB#lqgG? f+,4f+Ob',$_CNt~9Or?U# n2sUNi>Oj؆ُ1;I%R8eT+ixfv謄z3҅+Su\.?[?) 
Eߛ`Xd8ݔ$&7.e][[siK%eTZNU$aXo=^]}I(*kaB3d=* MH \ ۱BZӜBh{Q~!,x xgZy{`5Ig_tv4pF)[^^f)~.F!jg77"R ߼|uw U&]{\]޽ݹR^yۃȤ0ADv$G_DI1~"ȋwfR܅u3t07V+vr{wEŐҝ)ͫל?@p"$ޔS9صjhu~OzABr;5Np> }i󧒁뫉O !uQuN>NѴ}(Fgg.?\In7!wzv,̶ -I>NBڭt}y4^B2uGgs@<XsaYIrT J\@xIJj'~1Bg:` /pM^`ؤ$zM>&`*: ;S3:+jǦz4#Z[$[C9)E^c 1u0)-Zp ~,7IBM$s <jy}k+})=JR^+/frp=^yh2=&I0 .*+]MW'ϳՋۙ:]a HBI'j]m@tIz V"qy;!}ƅB#Ȁ&2jGk)""GeiMS`9fd U꿿:ӱͨWR9CVgJǼBj& yc(H ȘFm#_,iZej~&3p& ฅq7L|i d8|R*sꃃ5mu!v#v n]at:gsam!x&FHZ9Uw/{ubGPKc?[pf7s47?_y"pVV'g8ެ` NDkp2{7g>@ffQ%u#R0&שg9놥K>+}'ʒ.Qun{ZF$J폱owm\r3XdjIo`Ѳ~L(R6U$ֻ+'[=*>Wf5֟uX`uSk!T,ŻQ2޵; *On{nOjUlwɣ/HWf_c-i$tU0,|ZѓZmZv e2R`nLU+3#ֆ[0#HfY͢>C]JcBce3L[i0Ԓzo6s"ɑNSL\J!Ɛ\lB/,o1]8ڡֶm d16tn.7Jl_aP̒^^|ޭ5NH> Tޟ03Y붲mOpN1K/")L"Ԅވ ,yn3U3{g# w{;l6Sz|1D!r#)fՃ&0[mzw#KF)!`c/Z6j61ܥsb-Q+Dõ\6p zKU-$XDa l P81 .Ȝ@;d5Z!ؗr# )pu~eKPŎep S΍&,g4T >2):Qjc100҇ (4@|n9Xnq,  (ʕʜ-!tјHu EC [8iM@DWH@`+A%FJ0q_ d1_w$\⵼.Sn:z-+iwۣШWIؾiG/#GjG$ɘͤ&$AmYqtJմ28.2|s*SBVOln#7ڕh E0}V 7{ֿ6 >jhWpg|Dn˥K]Y۰ncnL NRI3p}_jRVkGjEaȺ)L?UR⾬eNaVVM([{_uɋxB'އ-{|9)oO: nXPn1s4݂ܛIx8Rşr{˕^9l}|`yPEt4oX+g=+z~R8M,BRIQ)u0gqZ]c3V,G EipwQsXs3 1n]NӰ]C%Q?)ֆ}}Y` ꑸZ8A%ӓU:ꢒ?lVFX'g}Y*D݋.%A&d=0vu1NfroFDSOPNǚ XI|U`O~X t B6H>8*V%H6~5ͦc`J¯wy$:HuD/`ǽaʵ6dgb9L;.sTg{Ɯ3r]xjޔ?ϓوP-7:OfGm Ư蛯HGѳLv4sY}t8IͩfM|ی<Ϭfi"9lhve³4ǧz@]Z}U(8=׎j580tXT/s۳ք4$ rh?2^|H`Lh/e9{U-ggIsGvVw9ih`p[?Jn ֶmƴqD/$b`.p_G~6ΩU^ O*yaӼ^&$r&#C^z`Ts^rMH F_b$ͬ'`ߨ. 
<^Q+!BR51p-87KW\ k mYB_vlSu:~3vf$/ڢ-I9IˬfS vwᆰR kP7JjS"`,D P8Ж IM}=s,cYșogxJMR9(vS{6'Epْ[X[4WSiSrhIWX#%mEoJMxڵpw1*Ii^c=3 4d1IPN$d12+`vo$Y̷KN`h3F+*٬̴'{%1\]5 6R/3S,* O<^El~eߎ*X\Fڵ_B`Րtqg`\B[t`x73zv椧4mMF~ZHFxxˇɖdҐS'Am.(S"pmSJxÞJR/KQO L}t؆6DI1[-waќ・3,){z@t!]891/ڵQ,G#G5WllYU}s,;cey+]ѱrlGF1D#TgoWR{,CW)~BW)~U{xntY@R*"AB{4R赎kR]'C9YiVjE NS|7RXPCU!,'5%-S,ibcˮEOS6@y³~dlQZjО9;$9e c]grznF?1]5G<pBѷS^rdQTݭ6A>V0hEH_yozfjoKx)}>ŚcCA<Ǝ:ÿ5cg F sd@S,gg8HI F!G!J(7ճ=T>6=:+Ě#5Kgv3$F8'ddEI蓩\yJGґt~^I"Zw2981JK_jSE) xxu]'Tul4Ɨtz˜*V+!KZ*R` !%RL4jV&F~&n|9o䀯kn'!mC'_s/C u@2'-hBKl$ud~Bv]%*sWɟsk֕iD3RccBǀ@jDm[LW=mrJj;^U WpWP'h %Cu P=?>TRPBeM]'D]'S{AHA%֏_'.1;q]U7$rߝ!m yPm;YNGLi GuetWN쥽 ?8d-nfKtRۆpZѳitޑZT뾸X[֚8%u3v,^'1,Mku_Vi1F;Crj@ BF2Wi_8tI$喋bh!h֢9-PP'*?9zM;܀VFGW64s}#5Ykcg.xϡ,(3hm,5:LZ|]hJa\hJ̦-e‘%ߨ5E0$*8I)GQJREWȂ)3OQ`Qj @jMW7J|5dvW1"o\DX=XhS{奨VF5{j´IctqQ^^q1(GlonFwo_"5ZTyQ]NgݍߌO ZOT/$myMK0M 82*4_)4O췊] V-PFew42e#*ɵPa() dO_. /dQ] uXƟw. g-ǜ*΄ $, h|iTU7v# >l |HM.~w1|yB3n(CUE+Y.Q`L8Hb֖BRYsLP"i]T%Eai: }ozggN`hi2* J$AMVrgP$G-KK;Au }ZsL˪cC04CsUՋZuߤԦF3IcmZb} 'p4H|*DTVOc9HT-+ӀDkŌVkw(D>KLLqzJQO )R589UfрJ渄m`D$3xS{1U2h͖!3)N)~kJ-G@9% xRt(-W2e hU2gGq[4(21d Ԇ2jOl@WQjڻ!\)~"c-e6O=E> %ZD&-,bN'-R;Rhr]cj1ۚY.ҩElՆϏnmEzjk v>>ܠ~9Ίyze<v/hUTx;S| vf}RT{ysբ& LFyޢ[8hKHRaIAPhD!(:D[G9֊iϓNt6\ɆQ1~qy)+4 Liy@G}}ʝ+('4j.K 6]ͭiDDNa'NZq &+1ϐ\(Z^2"`÷xyqN.oFr1e˜.s_޼7)fv}M/@sݠ5Q[έ ,W*IP~HMډBeq PCpE@Rjg[yR鄐:;* *OoF5GaTP =&*Tf&͐ #9<&GG*j'rf 0G<ܟc۬;n9;u](ycl`Nr@%2Xa&F!BĀ«َ6P ©Ǒ粓SEǖVm(Q&A Ri}DJ@x0IP*jL*0$q\ :h.ѪqɁѼwFD N1gFGp ` ”yEê{gB8TQ"U wrnBmO֪M ahJ"(1Xd~~WA2YBx*LMG=+0fUx[i$[WN9HGwRDLuht;I5Mq P̢I,lo?>a4̉ [?=f|]/{z}աG4˗O rg' r<ńGQk>a DEmpwƲX7+A,yy_j>8@ \&%O8Vu>F0KO5\7ōs4!=1 aN 8!~s  >^erϋ`M3QA3oƣdVXǫ_+uaO+噛]Ys#7+ N$nGziw{fiBQU@eSǒT ")DHJ|y dʬg bJ@9j s{< ,rHP,:* #d(A]/;_U !d!i\v^.9j_' [S؞u孟oVsk/'ۛq"ū,$/H}?~6Q9>/MaKW]$r>uHBvuu4@x5QNŖ$𓘨.@<8s0+LYLfC@ɳR8s. 䮮iO) ";~L9Yqn5|. 
[binary content: gzip-compressed kubelet log (`var/home/core/zuul-output/logs/kubelet.log.gz`) inside a tar archive — compressed data is not representable as text]
T}} S):ŜRμY\ؙ3s8pqWgzնCpK=`oRR(jOe(o 2MȤ'#<#C,Pď[X4tYw=s]O!r [^jH3^ x}Bl3S~m9HyG!D(:f!0q_7TMUzt1Dt>|g{ף*e* TѐE5#SPuNGofu7J0H %ld1OOL)4W|}4d8>m4H;OyԉLZk'I0EVL ޷0-޼#(b19xmKfUYK@t䠺Irnl[ƚEp[=0U0GZ Q<uQ\NpI#"N[oBFuM^]ur M(t$*UQ9DGZFIT1)mE]BkAa)2(g;(@}19)E^KDXkdj>ӗs'Z4H^J8ŤiIŐhG!Ekƃyk,@1V_6lI! M׼^->َ|v9r?d̗#|׾Ԙ3o_~Zhx>$dWo{9tYɂ*z)a|]a>S|6|\G?wB1X4s'A ƆP6V m.L+x*STRỊ~Q/4K2?:ˠUrOb0',> t} {ciğX/L^/j)@NC%UD_]Ōk>ш*@CuX#EAi*K0 SRm =ٶ1 %LT__v$f%|;V%Qank|;4C!Jh3e53UX{-bB^ *_CO?O׻oE6DH'7. rfHph2qh+8-$E$2uG`ء d(FIa3X<-BR6pp_8@`}˛-d B\iԔ*g:Z2"”yG yRQFEl^ V -4Ҫ/{gxxȂV=>K?q)`c5/&|Ϋ]ɬI Zg}țrLV*[r=5.0$51I7m L܎uXB38̴exmH_Qvft>53x<1=Ԏzc ozqΟ8gΔ7,R0QXPY!wB+V*!'뵽_-5EufhHxs'ΦɈDYB-l {#G'3!-Т[gb)"Yoxts=ͯ/VNq3p'!٭M'l곴Igw.uAuˆo`?j00$F.jIYn$ A-sZ\v7c) ۰(؜rv<ڑd00MndklrVIG֙x0_=\Ɗ R^29Du5[NĭʯmZJu#E+_+MƘ: `YGvFd/kXPrJ%}>8s 5)U^1aLk "2*,ycl В1JP`;!ĐjU7Eʾr 0R::>"o7Ff߬o(^uZ-Ȼw`'Lu7)U>2 AD߯_77' 07+;RE?|yt/d45ld1 ŐOG)|INF nYi]L[ 4[cY-,1;}2KwCC[ք1!Nf2)%1_QGk۔jW@},W:cT@tWP5y6TMD(9 +~ ;URJKMn>t1tO lf>9[|^ř;nWN<]L'.8`~+_S5E*_S5E| ppAzXpbTmQ14QcHNRI\N^,iy SǷOVgυX4"jt5K1@c W>arͳ]"8)R"8)5NʨJfzmd#)"x NN$cR)ֿ0aGbĒ3/ TxMPGŠ-j UsVY KzRwi,"YEHEcֽg 9L9,%Je-(Ꞥ m1edRl!ʬ/Af5A[ &*n/K̪{%Ȭf0hJl`_5guR<󂔟"LǺ Uj@AG \ j6uc8S,-Q£+3˽թҐQZ* 19rM:P˂Nygt(RY RÝr1O#vc4t5x65xGS]*U))r|xL,]t4 Rb~ūrR[Mu)RG"u)RGQg AAD!/Z{l$bgİ PCHEXP[Y~ꗀtۿ5.ER.ER{׍@ci¶ıh%&XM8Q 4-܂*@#*'I9\E}:J>R4f5H< s(6.Fe` DG8eD,"ey)˳HYE9YdHK]KqzzzzQV%U=}5+R`U&bEXyRt4\)G3o\U"%Wn:f݃:{^;uK Nx % TS1SQi"{ X9BA% 8z't{yuGY>YPAG]"-R"-mgln8f[`1aK.K2aaQ`׃ŎR2qM`w7ض:ϕ֩4a%X'{Qc\+].8!pVZuk"H"Dzc (qB>k‘bY+fǑe.J:b@*F;Z`M5f +E{rgGMz},.Kv\dw3JE˨tv.Qzy'rr"{^E;n-|- =3 ƺ"6FE,DkF^Y~dWck{i0Rvų\GEhw9iU+Gs,aT:d, l#0c%(Wnzozg כ}纱/1!{pYw&?aĆPy:Be GWZB(3j;FQǢëA-U8qѝ x4Mьu)z?l/>|ݽ9"0,Bsѝp4s!8dvt厡 sxS4ug={W9/`nfbիm}7 ݶ|[sz1AٶhtG%LvU\[#4>S`G A0hB5ERQׇe A\jt5󫫒y ޛzHkcnH.&wQ*ZV6"-F*Re1 YⳣV(Uc;N9VѲqUirI($PÒ#'@#޷1$'"A.nZZ"*4YjdH'vIwT8E#Y!5Ֆ&0D,Q(DU0"`R1.ĄD|Bj%X P)yLBZEaa΂;KT)3, ֧R=)SaSZH*ob@fR pkgښ_QekgJ6CMj:yM@\ʶJ{һ/DI ARDtlQ!ϋe!`:Jx@2e59wE*A&dϙΦ14f:LgjEL$H`%8Ү0IT /Qqc @R̜Hmxw{>%zOLc": 
|sbS[b=#@eOWl|xZXyh_ڋgDUv?DDJP%` wz!G,Vw\Wb19OjC&?]ߙ{sJy0z5L>?Y#2~i?ھY|z+M,8,Q\Guw7hXc՗oYTIŀPQ;9^iatա*!21ALsk(uxɣ@r ɣ@Ji\| 3@_Ith}]_T%Zf嬆7ޏ`4SJ][@ȁ{d[aq(mGt/ tb9zfeyR^~|77Kxif^+>\^LnPDotgQ4!%EL,e^U6ޟhƗ?}7 }$/n÷x`xW{\IB`McM1\ HT"2E? ~{we-_wrr),lyj@5( Mg-Ҕn|wMp\L&xWtEVjZbnsP)NvRS}?Vrk=@j^EtenI{Cdo#][bاi_Ľ,E\W29mUYǨ55z2$C,z&yYu}2t N  #榽X#2#*bЂ"Z#0 ܇B#$:d n$0+r@gɖG)*`86F4?*CMK2h G "+) LfG2ɔG I*4+ 8_K-7xD\E\Q8.Rs,&Z>2pLƪ`6(nZJwIDuN&GjhW½57kJBUEf9&HsD Vh$1,6FL'ۺZa#Va&s [1fuxTnm?f#FY R 3ܵZ#TH%cV-)}Kbꔇ)ք-0*x"9g` uZdCcjx&Z==8btUd(Q8WSQy+T%avnЛ~|0OǷyw!9"bYAɹu_zٻlY$1 &n_䵞Z$LM5C~Lnpyn;pYYN>\\N)u^*QrAQ #!@'kzSDzk M>zW걶)HVj N[m4ӠY}0VNzP ޛ=XD9BZH2PfbŊäcOq Pyu 9 0U4$P#ˆ1.%Vp 6T7AJ՜ |f4F =JBm_ls Ar!`Jte@9b ѕjĉ^Byl Dj%5W#sZN#YwKZΨZ0"!P*-m(g+MR9+lp``p4i@(-Z UBs*drjbI&v[ؤS[؄8N$cIig}hFcUMoz{!RlH&Hh2RC fhߦg+=!ۦvc~F_ӓ#VONiE5a0tb')SlJ TV`Cm0X801*:AAomNW{EaoiAƒ $pIa1rqkk+g XQŦA[c3:ޗ+i6/M6=m5XGR, A1Lhͯc {q"0?K=VWJP>v.A6Ôϯէp)=VIɲl# Ow^~7<#)"7e43&_~x6 3?>uu!LS%Cۓ?cJ|F0QTʳ,໽=X 1vιKN&PڊkP1e@ \og+8Gɍ[רqr2d ;xԄa&w)9&79*'y'w> &պUN܃![Ee2z{x.V 5[FQJ)_>{Xחۛr7<&bEE%wT4β2:tB"A;PCGEl=U?CDX|2B9a䍐ˣeJj>Zr|uDw!RT/c*/4u ̟zusy:> ;+雸gg|MW#D!6q^ŦW-}iྒ^[.fWKЊ)~S-@x@USN"C퐈7IlD1%| G߶(7IlyǿUl@5)+Pcb[\OeW3$Ѐ'WCbz!*4zq8~r4kK5QI}`U=DlŽ-'8IhOs'Ӷ DɎmA}q2Z yod&mȟϷ./+J y} ]hWS՜s, pTz؟ϥbPGq$:=?-' E@˷_zM[ֳ ,7kkZ1u't"kz2xjXaqdM TZ@4 3z+*<5F,qgAHc oC̠…%6 Md?Vw3Z&FK) n|!ǕEF}6HF.Ea-Xw\+- dW ≝qW?j8PW2@De}fھ-8]]Q[1!c6HHVpR"l "-Q K(,$0)Ri)elF6K xRA= ?A-tz5ALgq×k oX$s0*R .$ژ3L,0&Xб@A{"DЮ}(p|J(Gy~WI"\M? 
9ӽ~S<-LP~6Ϟ$&쉄`Wu7M#`kp j\\R#R=B3b7n, ZH)BqiabEmAa77CKs>=!)o{*M?<ކ[[N)3*va4=F-jry!;Ykt40/oaᷠmJw3NIYLZª,8/v##ǣ߾<^:cimas &JlɵK9AE"ע<KxʽkQg9DrEqFEJ`-7 o֛@n F5Q>{Un†9`׵=q9)G`;}R^X'=YҬJmQS0G#\TNBc(tDA:tmC7\7LjTr" $c\ ʀ|TnmsW75RʨpGW˺I^:aղnǘXsSa}& jq ٲZ~8@-:%3JxHzzpZtF1xjM |>8e//6m(!U|u ).f.KV`[WAA l;?x$\c2y$s!&JL*7̦<ٻƍ$ۣ0ld,f?loW)!eNOYg*jΤUJk.Ak])آ*dN`TGJN8'mT*R?]DT>b <>Ş0?˟xaGc">+֤^]~ _k^ta:k*Zl dtњ:P ./1IQ!%V:N}I@9k#-GvQp^aB6VC#uW,r^:w:*Ζso(Q#u)Oj95qCہ:Ctiy[wjO|(хj߽0Zj .zT t1yBlBo|s`Zz9O ݺ9g Zs޳(VHsp}0i=&'&&zw5=?8覱4|>T{Wv1^S *jR!z{m\< R|H;Ǻ?c.j j6 iseNA*zOaO,[a~dŲ4^O#84BPgC{`YSpNUs[wv\5@] hHYХj&I G ’ LVnTRHAtY6`DCN2y%2'8c̀Ko X(V"UJ+k8)1gB] % "<\j =ƶ" A[[bQL [`j,jyye'Nu@"]]/I X9FԣrSiR"oQCr^Ph oZ:8~p"4T&jQ0m6qs,sƊcmJ$ldeĖ` X͓#+ w,8(ZZ GIc-c\ґ#c)w`6TbLt{}ڀIjoR$>q]j2q?4_'gOӹYNfwHw*j}L6ƿ'1!/ٷSZ?ݻ1OjiO|Aݎ1E[ sś1L#t6q)U+_$vq/:ӗEfOHT)ruL=@EgOzWpKUs|-Y7,z ꕬGnN3x%Nͻe4ջa!oDslc:wӸ~:w trĻ/;uZkеwhwkB޸hMw trĻ3GbDֆq=ܦT{ [*}p嫡U w )\ DHIzzv<[IZR'fH+{~! "<^f }#^H~Kâkumz2qi,nCCIoiw6LAJ9_LVz!9GK1]9^x0o>*fWyV|܆1iA ٔ&hB&!;kKɆBA~6[+vMxޯ-ЖZ:dp6"o4MLý7*q4h.Nm?<DhZzdi'/ uL&f~}ycۗ5@8B5\c#VͽCJ1:p/,J??1u=; T%uZ+J)ʔ.`  9m*KI( K7{OFc\U7bwE))Z?, f~e?`ZG&lVbUz]qt2GrG18p`(p!zJu V-5"J.Ada[i#HXF2a5Z6XȓIdT[ $} E>88JKo1%^ Ze+Z6JZ0T UbevmK3ZI:X]dA`c$h sJRD̽ҨpeƓV882Mn#X-'s/#= ~1(SlRmGysyݮS4/F RR[NK8 M+aEwů$R+i'//aDJJ)VĂwA8-1e!ĄRc<%"my#!'1– Aܬ &\ 4ܨ@KE,mXE XHIZJ:-F-À k=<Ҕw)7&J|3GiXBz !bJUkHvK et&_Bx;1\ȹ`/&. 
g4mǶxOtq/< 4{f#fo~u .F[/hT Zz0OuT6ЖbL[-nT %-[Y4| ڛ~G=䳇D띢¼8j\8XpxOMb1Jb'Xeb˜%*"07c3XV3!H #Ζr# G5ZnmX)ap" jk amr;1OmVNwkNRo^#}|Y~J$B*x4ng`Cճvd-т~p) wUW]b\g?ELoei^u»&JH}{j8 E2թ!eVx ^2`TJ:Âe"C,5Y˫E)kG!<5%̹rE|_>د0y!D|} &DžW|Ͽ Tӟ{|)G/xh$Vŏ  AD_D(MTm߉-4?b%|&GߛEWEQqY@BZ# v?PаR!dPU}O3[o d,IF%S NfGJ~ظCD38ǂ ?{dA@A~ՒH{ b._ϰ,~:QaZvW[rJ,[L#v"y$k?Mh4{|p6taq;K%;%-t܏fqzEt>x;{ ʱi*b/ơ( Wj=3WF FeZ!d'o 9|XI眔%H+lBYs\-QF^8nDWch62ޕ57vKJ*=)u]$嗤X8XfhԌmpxVI<| @7G]9ۙD9o.Ae<1?8gHU֠PՖ%#H"u0UR ޝ&5l;6HhlOil8kik8wq&}[ڮqCW!BNɺ󆈴LNT+i_öJHYH9o+zWM7~,R5ni"ƭ'[':jEgvXpoS]pt\LFCu+qTb,W!pͮ˚YCQšQv^==HkAfj7vj7V`!EH]; 9բF7^Upd=3uuϊ6ߞ43:ɴ Etmݡovn1D}c+5 YgVczf9]t'jPQ~@P5~T,8˾tUfנZ5sxǞez?khޟj-dr<յM@HM*djopdV^ ՘ޗ4XV7Zۮ ~qeI,/ Ozwȓ꤯@$k/#A&};d~䯳B%G6CDGo5B&{n0Mڰ'N\?Հm"qdzgNKHrf/zP }{5dϏDAr(M+ŅB5RJ5I4C\Q͓lYŽX>T5 WS.w_l"^˴1}|1w.S,|ooܲ~J6JX*?;Ǫmy2?Ngd"O҃[[-RKo  (#oEa_E ҳhM>\ry! 9r%S >dRHGVѩ;G_2xn{!nMHȑh-"z$kM:hnu1sTn=nzOL/ݚ#Z2 5_ni[[] rDU[iZeVRvkBB\DؙZq=TB: -JBA<X!%<ӛnsp4-]_g6|v,쵚C6}c"_mؤW.'/Ki i5GP"\\a$LO㷦#<UxhmߨN`zp0a|K} u0jꝂPK}Rʀ* zRX$ +up3TMSTDqXOa3bIf@>[\1zL$sJ8c\$ǝ\X{E,{<2тrjjӷEgs?)GM"&uȺIʺ2 Ő. Uڒ;$|hs0%8>wcx`qysOqD yg ՞<_>!Ƣ]3' O6{GM]g. 
%d"-& ݗϋzxp_W?_}5wﮦ廇EȪaNf< X:*/q-`Cr 9gƄxAz,d%DKq),R!:a=?8IӃFP` 8PÑVZ +c셢PpSS)^Ƀ įmT@ 'tsc5=i'Kϗ*mtE )׸@I+X[zAL],]SYrY +T z&;v9 Iô,j tQL0"c[\ɉl-]H*WX4lS;PH%AeA0xXF3" RIfKĐ\o_y/8g lpV*i}A^붎Me%g)!ж2*SHc?.`Hs?HD纯DGai[vN~n(,YӛSWҌ/ɗ+8 x( a3S@ 39DLr.&u0Au[ |_& 吃]j+y@2(DD(CE?,) +o&D9ш^&ܡ7ꎠkq[YU> M(U1Sz{J gOf(,mӼĞ06}7?bh@RVjy!ʯ(aD}bJ+ԸOF ҂bE;9]e a"-CQ\͐D]\xMO]/+1;^0XnQJ`P<(XsŹM5VPᑝH4̋ =qgӼc&}m0}_gsgeH\k& {¿6p{pvN.R JnCcZk7ilľ܀)V_jfUҭŸ@:CWk&Zh {+Q\Juw:pxƟ,pEbhXh~]X^%tj^9’!׫)V`R4\yЫXi^+ rQ*d2)2`!mbtյ/Yl2bIu޷jVeAket02ّVW5` &J4ed@X i-C8eGe'~~Ir ܘO~;Bsb73vh8ifWS럾-J?ouLbI1:&i@Cc7/c2؀Xzj-C`Xs&R[/T{>9|tyw7nzid܏Nt>87و!x~rF~157/?do՞|%J mqއ3i"iǟ~^Onxpo?*P,tY[Xx"BJxG=OޏK> >Rb?Kr1_PKt=ߦG/c()Vutq~ fn6+qZ )0[Ϙ +2]tyx)|9$)ǣhƊ/y(7j#ObmI<'imqQV6iYɤC`u!0bfΔ 0+߬;Q-" H%VLY0vgj|IQI<7G&$=j`'K‚gNohceAp oeХQW;B݉ji/-{*SA)HLb՞I3U{&i՞Te@qpl奴%D +7PL04o!7#mܙ_='Ռ|X;B|o;Nff=f՗Ip H MczFTi5_4Ÿ9ڬ6dSM,;s[8&US$q}P2dLgmYQTX+ SceRy*)cb;E5%T.R!0琤UB$K]~]IUJat@Xim4ľ-񜓠 DSrDxjL0B:XiPOg SC9xKxV{6XR"Di FŁ%4r^8Jo['O#T@ r[Ɍ>#fZLx1jҸ=x1< f{7-n߽]ZE5+-Y<䥓>8m /v{E0xr^='ƣdYKTv:f)ψy؞$4]͌;q3SmBy|T)%@i`FiQ>8¸TqIB-FsCpBfDkE3AT:N7l*i}fJ{c%L aF97;3M]weo[+iq"8#Jйn=vfp(HE'k7Ҹ?ZؙPhfI2lgmT*εv@x+t$t4J$sC@Sؗ#"ZbhkCM@a0dh/pQLY& !$#BNʣ44x tI &X>C/'zsTr6NS1=x1q#nğ ~-ˌβ1Pie)\ 3 7Cg]h& itHVxO!b ۻ+}Of!t-ձ %qrd^>|,?j_F>aPյzX_4F]ޘ56rϣ+r iqD* KOͺ{zJ{zyRQq݄+gPe])&]}Gwv?-FL68;D0OMQУ뚂$&F,mm 0 ڶ,C7NOijkpdV hIQtE>EdZID7~_HJy4ljfG,թp׵JU9Ff7Q9hzgdS c>Hz~dÛ6:[sG–%p&%G#-C|xcgob=S8 i+=CӺw1b@E#v%E7rVʤ1wocjFbG. 
* $TKGQv]Vze+s p}*tNnOGqē Ph_ØN|~nM6t%KL-"kB ŎNWPO`"\QuYTh%91$$?)9 Z a~>Lg#x9 C ;:EA{*(IHRE*xbj1J)ӌQ ZOXJz˧II̞NWn:G JM CA e\31!NICץ\FR4Q@r "Bh0sF(eNGhv%eEIM-q~| d۞HJmD=^H 4֣\!WâUۚSgWY wc,@س(.-][8KrupJ%=E^A-#@/N֖4Rhz<3mI SdfL7$_M Mga0.ƝUp[}&U-8h$XֻϳrR?On| 8ep4q+â0nxǐ!GO9M5輬G<oap*%|-R)QX(}4S}j4He9, +[ .E & TeeZ7+meYR T0:\#+rt%ݮp)f_?e[,GjH:PXf.!˂zJ*k)pZU, I ^n( [[S|f5фc^JViJkBb=E*qrVZo("- L3ALx>Ʀ{~~_EBQ=*kJhq3%..x {p8tXUY W%)9~]X|rK eiBu6jGq%A_~{Hs&.Km78xMPDi{džp&\3N Gcn(<91h*(pcpS%TN3׏w73חObQ&MogC }Ok'A[\ވXt?Swe%PjdH*l|lo:ug{t J{.114?n]unc !fpdgMGnny}-nzqwXV'T=J곝^WW_Stջ k91+W<ٷ7Ĭ$W۸L H.h>(A Wq%%!bTK8xUiUk酯8UCK%]e$H]R97#"L\[ (>$^h+%5x 3%3,T=Dԃ`k'ӗec ]qaA` @Ƅ(ѥU;TԔy!%R"Jx WZ'4ζr.F'H/[ ߾Z#p%S߰7?>~Q{`h5U}L N+56?~ t\j~^]toD꿹_WooV(0A}yO}zg^+"]ع65{EJt G $q#ڤ{ƚy$B,S hF9hK5uQ2SP/M6d$Bj, 6R5F("> ,@oQp~uy3./23[$Y;sw.m$(VJ-(WVU'KK' %KLF&/wL_ N8$?OwCqËwZ,(5MRv m'Vl+DIӠIdip8CakAmX  ] Zv vhK7;(R7I#]mp[v|aǸAaEvuUX\P!Үx$#J'OOl8`zu + >-"EDUMP2Zb y]>r5~vU':09KIr |~LOt$M饓pG*.u *LYBs41#/~Ũ5t;RI:hs+U]7S%Վ1a=N13ʨ/\E(C6N\v\̳_e2~uډ5ilBւzY'!ٙw2>CFZr}s=QDn]-rc`H_Bk?mrx)v~~utnW_jw.g9vc~|~0/rM/o_^2}9yiC2p( u+@]!dʨ Z"Ai +23ʁ:Co"]X( \E%]UŠZ{sѴ4ʖtuQ:O*Kr:!_"X; *eyU F1b[{;0eP!mYEL=ᆙd4@k0pwk˹XWo(@&K(znYq[~%Z9ȞS.cMլL1YpEENVz9@y7jt 2S:}WymD6J~.$ܐS]j:|) c󘾃h7SAh O4ШAS;)15S潵蝔7efJZO+mbyI#Q!~ }IRw{>,.98ch %_u^r __qImw~L0\_|nʰX% | <.=K-Q:U FhM_V&.3^1TXA>K,J_YݥJS0hHT~_'n~3|fzi!.z qT!;A,@m%Kk'.JC(ˆ{ 1v 4"i-YVnqWgo@=&Y5aK.։K^R"b]PbrT }"ޖd3cN.'J=f~dȏ4]LP?PΌ/qR‡p`( f֬d)E1M@ʨ,, BwJ^XJ+f p0)OYPBѓׯ}{瘸0aOkQLEEڬs\R~K,yI=qC=Cѯ&K`n؇jvDW!."Cٷ߶iQ^NZMcy&ѸԜ͜f0AUqƼ*g?!+<f- Lyc=Q 4qчcҟ -z 'gbĘ$pcxKKҚz&3̛Jګe^U6?˸kϟd~"H$Uoy!h6&?#p~mN SVoss<3MMk:m<)C,퉨ܦ 3]"YTjR\w=s֛G \KPFP0ʘp5z2((Ƣt*xPktB[+|M6ЄvVWV^ *QeW,eD"QitC~6, JZ*Y8!_Jo=BT B3ARWWL*A UEOٽ3q 7y5Yߠ$ĵ/:%@QV-*d#]L UJŦٖx%#o ZeTdZ[X%L2KEC=6 Z<U!E fb=ZMqAN ,\R+r]c0˂>2jct;ɬ ?` I5t{6 z/{U˂ TC;T@ˡgR@(.cY/m!JKd,lT$!X !Z"^!]˜8}`2Pӆ Ev=Gj!|O'hD&bSSOg%6PZH&/c˄ L?d"R*#))-~ix+5l=zSKӒ9ƆH{l5dBs5*mx^fYhPbb;^HQ:_.\-#'yq*!|_Emx8]}S/ ϰX|xwk㻵ߌSz㏳<]h]5eF?[|U!|Rx( dl۸)߰EଊLR@WMBE@R 
nLM<`Cⱺ!QjV^D\o~j*J?'Q D?\0ciA0"^(OSAo\()4B`LB(9 4qtgݐY2ޏגz1EԬRgE RҴJk.+E$X`! Lj|~1l|ˈYۘ=f8}?-w#i0"2^N>N{N 쌖A R#Qg6βӴh(hv+/oZ%Fd}mǪD~,OLszYQ Ϙ.I1mt1icsi#- 2m$]ʴ/t"57*~誫?9m^_mnfgvp tw *Vm#Fl.q–;P0YF Gi+P~N-%5#nBViOHALq?UN 7hH6@wM 8]!$} KꦓK] hH<-B!j9 \&R+-UT`>Uen-NBGURdbPL{UP'>yʮN]1~Zo}蛰Fߔtt7(˜1`R!k169LNkʅR`Ȓ "`0}>ANA壉[+6Hc!oiX|1c淾4o`?\5FH%Z`\]k|^QLRU?ד`q8WsE\p#e^:a ̘aʝY oՓ.Y"'S-ϳ\;+lN$|l&ϝqhĬ@8N%)I$%gAVHCK+ݽQBX;QLMzNHnM)4̧0rFwO~{X3+D?{ %^}EY񈷀MxPsAD } 3 Ng0xao) )d4n  \->FzXFE?tcC59Շ DIP47}6_s9>p k8Lqڲ{)׌K嵃Z_6e8#s<7xFyTf$Dc<Y ƌTlEMuX#?{/1f1fSgQ>x/ esgQTG COCu8s j:d 3d'q3k1g NXHyϳwlVs\guݖ9L\ wu L31+%B +o=>(̛։G|5VwЖ(g;g N[N>xvާ\_R0;<KQ#ie)鱱J(%Rc2E0Fx04vkCʃG]:)֑I@%Z%>M֛0Yod k<(92[M,%!YPFƆ:,$aLhq}b\[opa2.2K3/WJ9'ڑ)AA)hi$JEFX iHfV"_{@1K|WlqQ¼"9$` Jȭ@nHod)?~9R X6'0uBl[~ލ“aHNSSCN w>p" p$Q!9v7ECvʂmssD2a$0c`sXfYgWF#!.ηxd#u 賿Y)B"Qi~Ì{[5HJ YnQ2qHX4X9{su.g0>'*N+ C3y0-LY/xkRDV`s+ProIg5,X=uWTe@U52U_3| J%JE'.qz\**.uVplI-;bMyT:A"% #P#CM=%Uhs1_3? zCH,T8g^"b8"9ϰt0 CؐCʔXI.*T$KCKƣ뒬E6:ӟ(~oZAGY (9t"3Z ̋U(Y wdewvmң&wQ>W7$mcM#.`EVpKD­XϺ$ R{!H HAE>vyf0Ftj|?TDTo&H%i5E4u5 NQK];1ו?N>+j\)CD֠AQ>BNlSgh.h-%@.O vY*SvT'hZCo~T'G7f=㽷 9EpPe=W!s=硏0[11 \՝s}]4GU@K=;gxц9$Es2snqu&N5GsD tmF wϾ+[$t=#e`G\Χ^yι!flbӶ.=h蚅+u0ԻXuqj_lb.vģEKXEs+9Jcm NcëULJ[SC&aUL8n.{ZEAƧM騭#w :k>OjPRn`{|vR8rӈ 僙GfT#@ L!s ,"ॊqƜ؛p&Fb3,BKL̂ś>e&&pܡ`4|@ G$.\~9@ty:b)U$V JdLJ P od&=W2$'AjE^a Ŝv9wD|^ -(}zѯnzJ$P1's BK]:>*^S Hy:Z!^EY挡!8069LNυ*p$'B 7~# Z"TZ*?B[I :D$:jC}sBb~S7RjG\F޽kaZ:nnJ]F+AsL'ku,6/y( CހKM컀7H?~fѧ碴1`n͖fKaLy>1zm&.T~,''I`QZA?'Wiգ(qI)Gs%PW.esQTXSRE Y7Qp_,/QK~-E I]UVGfH{:Y[sF$ȯՐb#:6LcoT_oC`dC?~>ˏEcO>1|" G &Q*llPNBzUOYhѐrM)%{ͳT5[*1:Fvĭ;i)Amގ(R5!!\DdJô[r]5C~lJAnh)A*I˾v39\!Xk>lȆ|n}5*ڊ(}*,Be9H׃q) -Mˋ1dKܾnk>'=Uj3KYtZ7kK)srbjⳢ}WKִp/U! /a_@*%592G/1@8Viϛ>@3iZ^R4\y m'pCi%:bRI i:@OJh Kr41:/E5s#5FTe4g*s#Y-^T;P9jXfnfPu:F+Š4>qNs9:ϔ6w(#yHfZxsI5w?}msp,gOs7v\_?MxGQ:+j. 
{BCo^IW?=_%kYR}1SM.pP8o*Bz=;d=!<0Ṣ  vb!pqcVX WZ`k'# 3kPbf2'XX&*H81?\_mz I}U4urx3*TV Bd3Y˟TO*TRΞw }64 /B_tOYMOnč~t|R}%FeH&"D4#8d$؈R,aL`ј'{iʭ`NKLvZ˦{{:|UwBM->M=,v:ܛɟ1|ܦ,1'w"t:CDwuK0 Y '?\J<tuyi8) p@Y$", *:됳YA2[eWm{7p03&q˴Dʼnlՙ ?3M06tԀFyp*t]ʴ' xeЉMHH1 #8-"[瓕#e93?XbٜcvD̏ s?QQA^l̀*Am@kPì˧ ]f PX34᜙@@OPL2NR$HBp(\u t {O-T|6긻?fQ#[pb[ow]54doWo^T8i 1}5CP̯%M pJAzӮr!IrAX/ρޙ8??l-`^U\f|bʧJnGԬ5XG'ia@x4; 2fk_4'o'͔Iu`gƯG_Ӎ-X*f6u-4P]\D? r"z;{v.*c? fpgFkj4?Um'Sf'(Ҵcʛ{~ru-^|=I똞i~Z JR0idgZK<ޱ6=O' HtzTVՆgՆ$!k=uŎpSޱ_{>{kO¡U0g[hq00-P|% ō[f|H]pXO$<ּTL=y{?,wT|/Us]p<.'7,b*yXA [2.ͦuG7 W +%,*=ֻҾDSߑ'c?@0&Fxţ,NF*%bdoe93w΀9r1Sܑ͊,挩S[$S0(`9IE买d6v)T ~T%R;ZAMB<2|ajZ%FׂՑSz !FQ.L2$r8!$O"IsC2'QR] ~1C"_;ouM}19/!7,b!:ŰnvK$wS t.?)zb1N" pLb*&P.cD!RbVzhO5VS!VzzudZVNFۄ/$Xj#DO6\ZnB6ZTm3Zbg 1|iy39~%ޢFɪ\oB*I5<ӵ8QscPE0g<$|P|Iw;S3!šk5/SJQ\m2UL/Raz ׆F m;>[9 y *nEqI LHN:B?h6M-c)6Ȧu,ٔ@PK՚&`(v {-g)KRanDa!K3BM錓F%a$T&}ӊRS4/-J۪jF'mݓ4 WŏffL[NG>-Wew\h]G\j̻\u嗟 6cGut%ÉUH(Lud,jf`BѡY Z`pڍ[m}խ 0?EN+jօ4NaVn6Z)HaM?}z~:s,<m?r*K4FD׽M .ƩY-2jŭIPvr;DqX5|Pih!dT1Xj kL$JL4t9%J\{,tUO͆&8˼RJxh0a"~l'Ԧcld)yyґl=$#H\9~j斶s: /{|7g6ux=(S'yYܥ} H"(1agIbIq˟YS:B Ð,OYĦ3 e`khӘX Ȅrܒ,YqO4lzfLb,N`%i69\Xj.cfv.1 ]#wi} 4Lg)1&[ƵN MHʗX =8ÈSaf"|y0cC+"'a1N-@{)8h;s±ѥʎP@_x:@>Pe@H2Ux)T$S-PsYw0q5pGvP"Όh'SC3h)zz-r%^h@DD?IJxyG!'=bXۛ+/γՖJ7o},lb`-pl1بtɈƧ#߽JOnhPR]a~#C:өqBI ma RwJjp\ZJ Qsm$|Qm3Eyyl-%^ C)`@Kuo. [7oIp@F{d[߃h{lՂ!r.s <kͳkޮD]2@[WPB<RA1"0`"1e;JS'2Ӕ,IU3b 0D@JOjm !RӪ0BKA0eǭ'TvpP`? 
5\:`"nVa:A*c2m F90fK`tO5Iy@Q3ƣhN|92mY6'_FM,nGAZ?4` -g^HB;7-(ؐ$U Yt{w:7?:JN GrG+Sp'RprNZ-'@5}Vᖠ}<րKqٖi %FDJ^wӏ7!>!NFh859f~汃"B4 c]hGٔLҖkŸh@lҔ!ybs })Q[}> K={yz'0F7FoVpG }?*|1f|4߽bnzⱯ;~D1ZN✪DJyQyUQE<c׽wΤl=)_TPLjKXߣ{D@ >`4/O׍5s<< >mkX>ָ-sHL0Tyƙ!!$Gh- >G [reh]E M?|326o؇hBp$) U1w.{_,j HL g0ID"$DJ-2+Rl s a*kJIfфpg2J`Ѡ}܈R--!l `0@S*`Stm{a>=z~`&18?'bz~20 y[]/Wc_nADĈmg-zƓX.O@F8ο"gf2a^ugFs\͆C K6]*Rx[(kp y T~]T+DJ6>u[,/LrkYO'i;'I`colّ$@l.mݭ֌avi7*UdW^ǽ'Xzh&' C|z~y{c.7C_=-r3vW>qйS}0c~9z|X41uBUm>L^S_~ayh" b~<} VA;^%B?6VñVtZf :\Z:~baEO')靖=ߪ 1uKBmŌY"7/cD6&In'Pt{hz1pc\Hk\&uK6x&ZFDJn-Txv`sژ'0ϠgϽC :ϴ9%S68)k^۩x]IAvpky+2/E.)ѹc4uqBwAJR;ߌ! NG˔q+&wf6wCpp"8xuٟgP?QǴ%@K%*ruPgf B2K-r]=Ϣ!r^9n"n$.E& `%mSƺY !b33" ,x揧V@\/>z^4QucFbq:泏hj84|!T~MmeC_Juʙ׸d,5` vυJo+8Cld!(,`H;x! u /T#hJVZ]Rij]ũuUZWթU$+=3# J(lƔ,vIe(!DjV@YZI4]2 -NYXSJJ02, i@HJ#/RUR%0*^:!e^Z8c1bT0<+IbZMR $GGSt/YF4J1p $RKÀ؞3 M3RN1F _]LVEL[g[|}Ѵ>bSV,q.WA߸cb]eD=fx\m_>/2?M-MnmY2V+J,q@W4.~V0 ,%9VP܈Rg=ДܐxLpP)'XrT*-|mXqͭj/6Ws(hObJ(f|ٵ׷U PP"q3I&)1R! ;3W/`hƜP`OmB iv2L]Ы@%_<.Ynaq~_z|X4 Ox'CxxgS/1 et!}s{f4:uoVe^r* pH>jI"+b4bQY6&rڍ%$$Rꎎ7hk84ۦX%5]& ~iMBۛ˹B݈QXD?aZ,761 D!ګއtHD79D#mfFbЫ-!5ŶK,EbJ81[ BIFX7Z8YBw!k؉8Sʚqxd%7Q'2;͑/4R3J0K55>D}@}pb1'gH [[֥iiBcf%ܱJkwYr(Ygd{иBHjQB#=Yr h ʁ9*R!c$Dڥ,_KmdMH`" Woeqírݵ^$bv/IDŖ;ϓH $y#)bå0Gu7S ea)BfMfQ/(*}J:U%)(ŻIA[OP=Ze6zn?{e;V^9'/ԧ$ՌqފbJ~JVRmg9~Nmi Hmf4L t uJB=ƩڐWh'6ljCL4NVI!*̺9,)ִsH`+LFcAwZg>kP| "}\2f~N~;mH@EFw/W׳],Xڇs q+˟(/!|XU#BC-o;3\Zǁ"*2),*H`XѦ\Ƀ)8X LB* ),ƌ3KnFi-ș1 ODO2uP= :glTX9͠Ÿac{d1.41 9o7`(%ݖ1&zZ  rQFqL/lhaĨC)abD+DjJQR"dj=dԢ]HC-c #Nx4у N^}@DÖ'LyIC or)7&wdt`\o܁g*"|7&h0_X&wt3+R?. 
a}tb$Lt\`wj;k(-įEˋ-pq{jW7+c.꩝/s .Uؗl&q"V(uw UIOֱV/žlxY@Ĩ dҺʼ-jx:fJ*eWR}/AD-zԝ7€|v!\g 0;f4vt{?I̐:WuwIRLJ9I,Ĥ{"+7m2k$&NJHYF\;Z;J"퓤8RѕC~+Pځ)8E.d:/ϰ^qcj1%!ፕXs8Bi#D/bxIڮ*[YRW"sce2绰xj<Rh m)J%P"S`P*fH-jr@ y+5grm.s1H!;F6'x}\va!_fTsk&pu trhr p[n]Xnm (8r L?~o׊7'^O{oٮ rL'V`.{~ݷ7G}Q*TU:<"$8 Uofzhpmڞ?[UEzR[_ÃR#*R} =Oox5Ts%aַڟ5׫ǧ_SXVX 'uqUObgT(Alxq73>t:3~)*uNp/+gZ>3ҪTq>2U.LjuV}fpU^g؋ULzS9.U.l2)?]N '1^^𾴅g^Q8ȒpBewe^!sd;^`8n αۦpNu+.g2؍q@ceXnֲI;x- Gngư?SWH`GU`z|{JqHK%LfqTfe20([͖Ef1̡N-?+$-_{QbN¸Z(ʴ6|kWT9Jv, Wwp]&|{ϖ˾Vi|*%,{W۸vK~Jz;Ifp'0jcZӶdӃ)JMU$Zn"Sg}[.h_\4 yᅿ^fS7R/݃Y!)* 5] )i^!&b!*k Yܔs[ $IE^#)wfYmF-Q3`# HnE R.ܜDf@|(V*rD ~P-H2tUZQ `$l_X&P$,qX39ġ}NPe$?kB T-t5aGo'CԈQ Jm-QBS8ǡR>׀Ӥ# - aØ ߎh/SF< '/MLA>{{еT5QHfÐK|PK)j1 Uul+)%nQ?q7+OrVzw{ BK:љ0A2sۑLr2 NA-Ȕ@,!ʍ q$ D.g*bscI`~1z7\ }uwmC'PS#$zmqsę"N[fNrjpxca!B`5p5QU%(*(G:B.ëh2G5[UԳ@ C4X4(v+N_*K&i}7m؋51,[Q͏;[ NKɛށC70B|S~MZ-puAЩխ}sZzu[#̇h1%(ĽkPr rɊvv8%KrEk{<}%#iL>gr~,fA@ Z%Ek%u::)x>-@;6uGT/AFkxKlE:FQ‡i5Ю^+ӝE쇭;#0"}@t0@ŭw"k=G].1a{;+&w7 `.Rm{e}RkL=H%kkx$kxiQQgT 5|yu<&Gqr3F+w `3lkx mQcƯ㛁}jv z[6K* om9ØB{Y>״YpF vĭzl#EiujR l%/hR(8{eeJH0-5e5SphSGD;N.{r%ťM5G[R h}UEXR(Xmܻ NLDq8{ċrsʾZ8lpԟ-7M%쬵{dpP,E\k<_K*[59 96{%,i Ԥ; nIovP»d_8.J[ *mw 4U~JE>KQ&AEBD1.P8 L߬|5L-Q%e-պ`G75ÓOw .[eGVeh_]e]BNoti_GyMQ5zVsaU?w`8\-侜#38o KNMQvSӶ3rjgυ!kS6WZY% bJ%3m~Z5"3|Y>hUCa_Z?3){tnL{M L 3RrÕґ<Z6f9M{Yp쉐)ʍ#dk!EiGͱAD=6X e{qJ RWr?*h"A68 f0M&ҧkr@"!"O8܊4@xPvNkxTҭ"7Kf + u0=>?opT$ʾzQ2v,SȐLd rh/ ٮ/vV*llW!wZ=sY/F;!whHbm "A0JLtP`"h`eh+}S5T^5Y>=ă.K93 y&nUE~pP"-@ 9xVιpBv,d]|ER{u7_?/ C \j=lÁ,pT.0 :zy,Es9r nyo8ރ ✈Q@ Z>eك238Hv]20؃]I ` RXJ4J/aj)@VF??w_[e"^c-pUUaV$fEaV$fË_ޖ"Ox.-Ҳ]Mѻ#_! RӮS8_fM1npS̀ XKH V3\`(ϩ#RA\8mfYkPPp|PΚ<ň.!#¦N3.%#S-Ӄീ&_Μ}ꮆkPM4`m:+WJA>㕧0hf} =;DP8aTW%F{"!:3>c/w_*{ 'هUbe*I%BrAYF$T06 WB#mN91B-pjb-`&1=RX+r9G,O7]: )XdƓ|?ef6ObGbF>ܨvon٣RqK`F}uC,Y'9XHN;ߗ=OA ы & ^-hZӧrSӰv}G"住 cM^LҍVe*APůaxSo'M!MIs7]>glLaV]'M9ate.-oC>+k(yuV$ {FM|/cm]Bt%ǫ$ltZN:Z- b}Ow3;]Eδ׶?vcyX0TsVu*+.) 
99Zk)L!-dpH 2_!Yd`N!¿F.45B΅02&z4zH,$w\ҪYGrFQ 95AVXƜ@R=zu.:bĀ;j!&@ k] ~|ѕңIˁ0u^8Ąm |v _kpkH|yƋ0ߠ˷]SYxuu+]<,w]/hJ`T ɩq*Xi~v3cgu1X9sg+4s^Wܫ~NnĀ11qqJ*zwl t@elUZA:`k9E%A+-CɅ8L@ĘcT2Qs?‘O#WOjzu;WG_G ٛ/__2;hOcUFNy5rS4!f^sӀ kucB1y4[rztO%h =Ov_]^_;_=pE̬b+Y7j&?-6 ew.r _pMRq{=z f[|,>y{@ \7~`«|5x~xO~lgşwnl:V?>N&B]D'ۣ?Ceqjg57 n*<"pKFʀ_2lwwjnqWSg?y#Og'}Q_2@Cu*>ܰ.5 mN3{rf+&+yu;_^eWV!K)q2Mo_ʼnƋdx{Q~ nj+MW!g0MҙU \ Ƴf`ՕWR c¼:/ ojc~P&tD0ۿXZF!g@H[92GJ[Zs)2=xZܣd|j9鲈8a{0K3E--[%x=6ߥkyb_ ۦVW}ȧ}*1l1 p^vlyu,!/Y-nq8pтieMx]ÖW nuM_%׭+,/T<;pZ|s;is͐=c^PQ@ ʩ`' E{ @3qH;ġR *cR;ǐt5o[]`a?WBu -ERRu(t9%Z:,x&!+<#XDLl5 EB紺" ifaB5NM,,xXQBHGisbTpbvLZ`7hLE-Zup5V J'{q¿WWnM3Om_[ד{yk%Pq@qpwoYf[tSmL4突|8m7ePkg)=KQh@tv$!/\DSFGoh7Mvk˃iv@+[뉦j.$䅋hbnтڭ-RD;r!OLhvBB^K)6hǢRFF<!T / 1ؙ06[ *'>HqWC_|ޓG_sVZX^oia!E ]Zek#(Ig/i(6&p!8/8\弶RyQ7 }{ӯ唏B<`!eF S㛁LY~I'qzW128xt9nv%[{b[0GTFWEBB`snyzX[wc,ױp1319t*6,ͮ]ņ%=$tD 3 X$mp6#vk=Tօp]TN;/tihԄv36yƧĀ]}߽ym=W eSOϯ\t[ty~?U]߾2&[PusB5L'´; Fs*{s3pyQd:6!k/ۀ"A@i3-p%,{%$.gvH!F_lmWU;7rc}Toa|`  Q C~])ZB_p[Pantto=BZpg~uh1% s_ԿM$6vY:N];29yuz |QOj&WsTW;uva[˛Yyz>}\qVkCPvIJU (kւl(}/?i/`wk30[sgK_O[,#pKRjL_PtE;*sTriȠp$SItQ q^LO.civ72RH3e l8g17.m SLzT?$0w 2Ԉ&ƖtP}B+ KVTq/=Z =nCe .9 -qp8`Oum>`|\)niˀw:3R~5EjA JY}}-{JiI~1‹'0YV4_⠓_$7P?֥ _=x\.eQ~297V޻ǫ` %";D5;EK+Nlx0 #~'vQ<{ J)pPZ/ u.?z#M>=] CB`.㟬BJa[r]t(8BK: Jqy+ʲ%p#-%B H<p0znr*t'VM{ʒT.ukέ1/5TB_Z\@%TÓ=Tp/O,`Uiv,SRtTN?-)\^wea i̩? 
cΆH ]FRyew-VJώC,= Dd@>Ļt'q)К-$\(?ڇq&Y.Q#-2n4HY}foiQ3!vI=jsb纙`ZR5mF'ʌѾVVv.pO8\LԒJPgh-5[tP [Ibb`;Ycj\֑ߥ!Xs\wOdh6b Q`/xDrb } Fn4Z:$G$^G 1T5N1G.+i䱀*b*]_ĈdSoYTG*ҨTLD$J5`9(jzCt68*QJBNi,|7L+3& NIs{oڰc󅎑h9>W5PZ쯘[,^lO3 WۓNm)$29j,Q>}`^@v1G1aۉcLj],#@'GC$uQP90 W.y A}2-DsxSw'N<6; j,{_yr2?uzsֳk39 ^=jnخ"{TtgvD>F udA٫L]Z}5 Bi5>Y-\-#g#),K79pTzlOASZ0]fEn A!;cP* e-*Ϡiyln ؑ965|qѦP?sԚ~k*pqǑw|ˀ&F 5%c\D~X10͕0z|EC%G iV;PY * =8nk]hL'_tbRFFtE#TJeU.k.MI̢%U^W%4&:<=H(&2Tʺ1epTFfs&Dyƈ( WV%1JR!`+26e+a9e >)eBʈ:j'YYFgJ@T8 bfPTFF-RVRwZ j _@4.FeVh)rPF[3-Fo]xN6LF|,:#2D&!Ph1$V\>eTŭ283&9Am6nN3Rp#9y3>`k-Z˧Sӵdth<ˡ=AF`u쑅FS_)WawUsʓ/eqY 쀩<rC<1G B${^7 ]_汓RkrJٵ\<< »JL)7 |_xUĖK%ѝݗ%3b %(lg^ʈgg(0'e(h܏GzEbFY0y GBC7#Jю8RRֆuWqz!ē.p !/榐ZՊ.4@Psz ;fW`w躁1))5BFI4\i9kG:A !6QY3UV=ܞUS2ɫ8_ZfMQG1~}SW#TUHl3L[q\G,5B*J&yWCrJ8ϯ=iɻ:bEU&~;'3xl2s_t+tit? Is)mYՓ `P ϑMY 潔5J5`X%tEKʖTm"g^lޑrH̢\ҹ-g#!8b)ᒦ"L )FԔӈR\( 5MLH9] ˒`P}`pYŞQZ:N/˅#qA_] 1p_\]OM.q罹%ւx6 bɥ|5?m ?n5=w}۹\.oOhS-ק^Q S@:|qdQ]O>Q^PtcMLZq,v_^aiٻ ^UFcT_O\ߏ1hc!x BT(%%ۈ:؟9"BdhbiNQQ+< *Q.yhfѪq/"#HE!t5U9r#s;;-zfV# Ӱz =YXn=PzDImoph90؏p?KkMGJ>P̺ u?0t_ -sF}NzxçCyspI\yPo$j"߽&O<=)?M S4%ssBW&bݒTg4 ]Er=y)(GR }Ӷ+}}@d9ۖ*7Ձ,䍛hM19w w+A~#ǻ1 :msneÉm y&ZdSR>|9xRN7r1O",ɼ[yRwB޸)h&n핫|9^|lŜZfMu_e}yQHڜo<~CsKR!_U՗Yừ39po׻WS{ѥV' 0Wy-ܶT u0ə誅ҭ:NhfBgJX+2D%mT#!XM=\D0~CO} n,ibaYg-!FP#u*|,"\qo˥,( :*Q`G*էZ-754Bq )՞ؚ8ZClQ|I]R u:IGt$%oͤiI7}t!"!jp͆&LGzDwkRl2 kU# Y/D"N8/Xͽ%62W`zZW@cs _*&.l-EeҖSy^׶'PCQrT5GK6.Hd!8S3؇+aI+L\vhƅbpES*aqJD4 WdJg20P.$W>>F9Ⱥ%[i(-hrŸ)^w{ʕEdն1L".e0uBtVА=tȥ<P=e 4GJ'`\`L!6 qX`\] b`{m #+%ocǵGchf{֯&2JӺ٭[EeYE.hX,UT+*k .c8zc1)9WhQ}ÿ'wV<ږm^8?zyuC 7W%:ӌ7/$`q Buq !m:P!m #1jLX_x&[û>'.OF߮yT>83.ؽd?m"Q?S?;uâ4DŠ=X=L{0Ym$*0y~9g{}y"Lkb\bQ8ZjCDmp;I:-rצd){xpN*%50m8?+"B2U`)\U܈yb!&FdC`}y\"1PcKWss9fu탶wHן: dß4 v4t[74Qsy+WcZ!Na{MS5NNqH5X'8Yu;Li [Gèt <)hJ]2I+\m9]bD&蒐⊻&v Rl<KAIY%h:)⼤DJ1ՙusK%Ic7uƜwJ_K PbV:dWơv&wBk]1.W"ڭ,q׶X`~r>|k:{a5#I{v_"b01%M䓟/f'_Jc-wڢ+[t+s BUkUSmϩm:Mo &'Wd mpbMۇrL%Q^&3G~ap0Q8$䵞p=Hd/6X|?OEjIOma/&^NRXcRTݞ0//o޳^կE=4lE;NySfu%G*&W}|(nO[(yP\P6\VߺN )[B؄CtصsKFuX/Ђ\hf"GP`{D׺ 
L“i^&"TG,T-=x%BMi}{_Z"ʙf#U%t`CQL0l]SfxSoO:䟲)=,q$-yPL(>'^S\REoI%gYunĭa(n+lL{ntxp+mHU+ :,FyA9 ~%g$3Ee)$JiK*B^ Ocs:"Q]Gh&-JZ% fjG94&}@?#O#_R.`^=VP"#>̂:Ц8!"e!4ʊi|mQIQ}f5fQ Vc4fbI%e^jDH+#5Rj==R<61[4dg#"<6N [d7Eƴ1#T}*2Ddy7FOT+@C.@Ia@Ӓ ȩeW@y `0M1mI-p5%V3]gxƉdz ijZӁԗM΁bbyic__}eA.xT?}6|*"VVh/#IR=ҭP3;M+u)g5FW.WjZ/XEm8Vzغ8M̋ dzuI*z*Q$O`QڅG/x˄rٟI[^Z-nv\ݴovVFoat '%#nRN@x[?hd?z$@7'+r}hr=cdsR;cdJ>t-G_-`:'Jơ !;/X,@@A9 tY\Y6a E+9ұf n/[Hv )>HO?#samE(60Po yzr->M@@9T)58̤FLp4#™Ŀ&xBfLk.gt`&K]cf 1&?.d.@( k֢P~W'dtTw;>ۖ*7 Y7b-ĹԏdޭT)Sv+ʛMȦZSq>n9xRN7r1ORbrD6qؔJ)PMk* 9}{h=_nήW74,n?wK:Ϸ>W IiD-l/|+ VVJSj+iJH HLxhjBL <h|TāΌ{u\gUMzT8YZx[W@jD!&,OqGl- 4KĈT'/ا4ՠ%i@ O a2( dB(G@RCu2 Pp!R o\uߧh7$ R&FE=׼=+8 yVޕ$b ;<60F< k1+GG݊zS2\:2.Z.('*F]*`0.%^*I t+2L:| @Z|EaTEo!ەph/wa"J)Z)Gty$[ʕ{Γ*sk^nf?\~w.eC{*ܝt`ݔ<и3yh2zﻋtxW뷷Ţ2@$Z4Nӗ,l{xPB;kb!X :ƈ,D6RdGE8Ňe >#5?kx5V:]V6F$(w${s} ㇼZDks4y`LAI",e׾d$ŷ1uM͡F+[ 7_&nb쨎 Kp3SHeꗂ䖀^.͎_{twc|>vw.%"$.]ލ«Eo3]׻MG&ݹ3뛳;Zf¼O.0S`JU_k16YEVH0t~&dd3oW>_Fӧq"7WKG/3)#~fWvjjj,jj$US7QA9iɵ̴{B-Bg620Xn},١L4s$ԉcyzR,$ᢇ@N2O, {ehf~&EXN:#"[_IǦf:eRY fm=;S(`A:dSJmVWYUןG#eővvo 7dy|n:aQ*={w ahgLg2blt~[ONjFlw;*(ІIA/xM|lH Su1ΐ-݁sN&F& ?͞3^Ug|/G76%tn2#\'&GiDO1V&?TB>@r^ww\$\}}S)kj7 ݞΓ#21c'}Mq+~ 36̕A4N v "yz-9"sտ =o7k_-Cc}Gn`ү;?ngq+81R?{AaJ5gVUb)?bQ"ҫ0gdC3%+cFAdr9F HjVԛLe/S>zL, 9HzR8ژ;YVgk 7vg08ww&V~[~ Qx߄õߖ0~$Q!6s2 (in b셣h21uY̴ ^Z 9XUV34LX x;/tXcLsXiD$^zTYlGf El\"EgK4L *X AddJj S!o7<)I z鵔(i?jאJdBG 1E9zgI!h3wgÕQ6X:0[:LX]9HfwG;N\W1b^biÏ&W3! Ϙ$!ɦ[AbW<}Q[}&%+F؎*j%ziH-Z*fgF|J_oWDSRޖtFɔuTu,07sߐ^~ D˕&ܽ2bتA d0KC獓wN:͌m3׬dy ZiL|@au沋LtH]DGL(&l'fMJ(k`+J;~wk9#Jk&cv1̶0Vk3*ALkðuJrP]M-Y蜀y\ Ù۷k 7m== 0ӑO:talN\DpC6/X͘$/9]5*Ε^OZ9 ްr809>#nL"kmK enAHBNȜF۷h 㷨66'RV1 MX& ;B:h>e1$+VD|,x2QYE`(Xqܠj4~|V?}0aAm2܄$: )_@J ݆g<ۀiۥ#HZBN AāԾ3p'm3tlv 7P dixq#nּ׌#+6OExo]wgbuﻨEbﻨW}F.$Z&>Ng>w=ϻty7 o)x]ᠥ! 
V$.;xAn86Y&r7#^j*p2fDU{bȩi,X4vy|p*{:StN(83Ɲ6P4d<]j8uy*܋,4JȦl.00/:$(~" vi=$B1y"&=GCLVVJ@&b2'-EFD>4Qfmxܚ󢆹LdmiT Pf#*L28 Us"֘+*+fE뭱۫9IY޵֑".6h-b,ڑd$-OH[w#<Nn,Vbɚ20PTPфJ*MKJf Vg}|k2l1Τ#ZhYzڄ6/)d40;.faTGxNf)lqΰ#3j#JA6˻l]nml٫pom+kmv_/>$'Dž5_p{o.Sf?._NZԓvPzbNɏrv~q*u 냓1Yց=t/tYOjk,3|QzݧSY1vz߳'&EzmWҽoM vI]3W i |}0"wfe[1(O_w=bP.X үQ}zN7Ī~%0gqަ4%7MdX!`_bYZ=9̍Q'+tVBcHz9;wf!vlFK{&Ѷ1A-=۸B!9$>ڬA?1cb:wԸdwxR\_;ǵ;Nd.䖧P| Ĭ {m@<PZ+.P v;n6T;ǿhC1f~Ç !t sV ݮ3A1Em j(Zn5%>NP8-'{:)CCT"eUb۔`49UaC.gqER0T;#$K(M IQ( Q>R9jst Oڠ6NQ-d[E(> VH8aVM).;[Te{Ԗ'0U)i\̾5RAKHbFbi81j#Idtk 4}!$R+ 8kt$ǽbJΤgP[^)Uju))HER$(r{Dž )ƏgLz] aensKUR;)@"KbEd ɌNۍlxD7[ .Yh) ٸ3 cg&VT 8锴3TN&ĵ%%歫7dfĩ6C,`,dkqȬiTA|eBYeÓ@$#Ա9vK gb$g2bҩ7ȥpDl[b 89ScKjDjrfᖞOtt0 {Ի W{3f~'W"~wP퍳r/0t= e$c(4:cG1mSٝږ2MO3ƩNS]$@e; ʦ w>(d @%դ _mvlɒ>kfs79 RG;Bc:~ G5h?4sk)\s(ܩS}W4Z@o^/Yjzl(qTß k~l]wͶߍmjtvaH ֖08ˆ SURs-%%ZkTdrDnx_6yOUN[sA{J xvbK7Hk-L`d7m9=6.w߳F 3'rӪm/nW蟾 p/xjlEF1ї#h ջiaY_m?/*ezB`}sENvx_rvrt6[K^>#:"r(r@J75ooQZ[M槸lv 6v7ZF\:Vj2$[gR蒧z/q9htmJ~gP:9-ꇸ1. q'~qNÆiඉY`yjp%Z3ůr(Kfs7j1_ϑ9`g3f3Z2.X%Jm-=jλ9ܢى0m,Qo`/O.`|,{6q==KFu\6{Бdbb:Z/'քiܟb-ѪDee!!c︎ҶQC½EKI;\&y[ў _4K]YoC=]a^Z|fbg8Wܓχ?})ǣzαSjFrQu mZsKg Yftr"{ω+]ڐh 8%+[XaъK󊸳n}3UJPm^b9_+OBdOFA( 16!ϑ'Ԛ8|?§ { #\)Ҏ> LɹJ>KFC-@>kX im,TzE-#Dܿķ6MVmlcjPm'?HpKmJC$y%/$zVP6>X"#CDe[ڢkb?}ܼg Ъ tmŹ;XaS-i WSv٣שK EŽAl AˉBEsy|^g ļgXzPmwv֔A;|2~dOK3߳;5n <0 ɝy&#8igg%K-Qjm$ X7m(o)1߳CwC>T iU(xs}z|qSMY%Ӓ=@9TS,9Q֒!dYwަѷt@9 $dT4B͈m, @,~$jTrcI9zzzm;ȤՋ贞.> L֦ ڷ{^ ?z1߈+&RK@b)\`XKBTߴT,U޼>bb%_v Joy|aLgn[7]+^PjcX_KVK%︮PxJj]`$Q뫎ӑWyviu  v~@'pL5o֏dOHTnM[9JIP>hJNJX3eJY$FH锣\Q98q3סr:qQIKv8t"eb|!rR>&6VQK@;Ω"/ʔn6-vBC4ڵ{d vg- ;K[*%BnIxU>xޖd,ђpF=9P7fnLSpZx'$!zë 㴟Ҩ.R!A?|z~'@ !v ,bbˠ#]]s8+CA*O[Sey$J<ؾv;sg~DlQjVdZ+8$<@c+C*RVF!s2wJD824e C+nTe.D \:AzԐabgk8Sq|ke-pVv6% +y&6\FLm;K9VU\MU4Rn ;pYD3.OSIr(/O1Fi_:D(g>5MO>`rf\~ r*=~vektyCrhc(-z%=ڊQ*h3Tnz!nv)"L\y8T֪IðllqP1iL+I7m, HȮs|K mR`ZùQ'vr BeJ]HEOcnZ9ۓY,lye[ X~| OJr VުD#@r.sWAS NC\gDo>@)bE>~18f Iid҈1d?d/ -=i뇫uHRnqJ;rZ; Dmp h\a BP5>9ʃO2d/ჅI~=W ӴVq egMz1 #x"ia&x*z&H /I-DdyZ!,>4&avGŷȼg do,ص 8L"{^ʕe^ 
'*0;eLE8ҖjU`b5ijjUfEmKa[jǖ{ِhWXI4+):2 lӢJov(Bڕe7wت$5S[\l=T^JkPn 74V)y2ZNjq8iHhh} "ΝFƼ/5ݞ]rddbD.^a,ǡſW#ݟw!r8^{XUjhWҨd.̷bG`E[WSũTȄ|7[e)uq-..hBWCnD[A`CۼP{: GDvs'!7{ӶMr\΍g-hLZA7]MfQe\8]jZN vcg@[-ZgS:+'t:~N,u-[h٤o(OV^:W4褓ȄqP pۜ.5Ϟ8B]`{Rjj4!zdvuF)`8PG|u;RSdVP/9)4ޙCࣼ<D}ȥ7xÝ* \ a!M-Y' _vZ+?I6Į1ڛ|LMΘ\.??pN=}'sO?2G&`éܭEy"{K:3sv\znF9{=w:JspnU𞏁Zb_Z $N@VGbqah[=4@Xk7d†(Áz_%c#mOXYےr1QCQ^L3/9u&:@rך=y~r\٭(3~ pFjVuɟYr[Egd}?ܧo $k]r+lV5ZRXSMTG#ۙ-zΔo;)-@{D6 '}!*bkfI P&b(WBrlf㻟TY4..3h0ӿ4.3q(N)Ud.. 8;f"z .d|iѻ6lYX2HVZꂽl}z2ShCbk0}@[[8I:]zS+-;ц@fD,ˌ6]$G jZYˠh!C Bɦ5$B4T&IdbF>jF_a`l6cTɤ6LXI;?ǁaǏ1DK~2lǯZχOXۋOlunK}7A6iy {w{Rz^I/?0^#&{^b!h+#BtWP l@"h;>Ga"|,c^)W Yzyc=CԗYdna*1$9Qh{b|V ix:V!}~Zx?TVVBDMD^g+f:hfk9't{~e^L|fr-$ZN)d 0YGy6D`(GRͽL,~{N%#mWnVe[ef N4T^4iՒ=NW#ũVB]d7đ<y+weUoGt˲0>a$U(]k\enVP\)βFM*M3 YH2D'#mz٢?=K)6td":HEjrA`}-=uʱw²Bw-V yst# J*ߚy,|)+րvDM㶻wt"if>90س%^䙀zU>靓ҫJ\ɈT|2HztkBũ;:?b~qʥfh,ɨ/g،%LG8݈ӏwǻI/b_ '{?/,"'p~1c/2e$4%…V28Ec҅}Dh!gGH,CJȸySon7\:ݗgQk]0cvCCEe>v U:Mz{MÐmO|U{ ŘJ`A:u5Z|z2 [XVQ1H)i&\ӌ0y+sw]'v; %O^&;ݯ.w~Z, eWQ = 4z0|.7f4 Y,YVFZmu\a T lVk)`'Bt/M`M8Xonb+WWMBj$ #_xvs_y(K%_11cw3=cb / b{>n|‹LV~ܕ7sw%hQ;ieλ>H`|u(S)/r'/^ǔ_}Cn܅ۛ>Ǝ ^{ZO7N6zBǾmC5'#J ,*#ėc rb=wI5鉡w, r}&Hx@R骫.)ቪ.G#cltҁ>|p|r`6 9] + Alp-ZN_HrrvX2(ZNfQL^Y,rF;5P%1pK'%N -ۙlє ܙB4]@iؖڶ$ %?<q4kO') y9 F"ܤs-5*(T>l*&r8UXTO)=&AaMVV,lA6.ԛӴHA6]Φj, ۽6Ѱv%ߘaMM7gטHvZ+ u`F4ȶn`v *#ò%Kc6<ԇxʕ#}h&gtQ;;weRlD3@ cT&:ufERi!T2Z"q>Sh8^:|'d(F#E/m'cl_\^]+v_㤊rۯ?WO]\ǰvf~lΓ?/ԏqFM m}yp.nַ'c$u񣄸$Ŀ䧈5W_Wkw{M?8r]cTnڪ}`d/n{ k^צj΁}ɞW~"2h+B4Z5؀ D(}?xD`a0+2ߠEm?~ux%VxjojAAZɦVm\#]\e@l|\K9h[R'4V] ûlFu{W7~D{hߛAYV-NW$֯๚6n$"boWr mF!:QA uG„36Ga$!N`Oj4$rԄ@LU,͉Є$7sL<7Tg9V&2KR?ēdDbGM#*.R!S&Ihnj3KRUb1;3o)GJ\tn7ֈ5ԕyp{o/ laXHVIC0T U8LG3BA]@'VB)1 Kl`{/U HDrSSO#` `umEqJb\Y<_O2{X1W5z՘?567㝋Nm̟` e4T1/8R.?esO clX"֍iF\t<Ty?}FN\K%#K OɲdYgqA0ΕW`se(~%t} 7ާ|&Ex: c"P\zg\Ym+fl`ϛΔ:Eoz)U|ݻ荱^gdm*&$.XHC9/0%za  G8!w>zZ? 
Feb 27 10:17:26 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 27 10:17:26 crc restorecon[4707]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 10:17:26 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 
10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 
crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 27 10:17:27 crc
restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 
crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:17:27 crc restorecon[4707]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc 
restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:17:27 crc restorecon[4707]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 27 10:17:28 crc kubenswrapper[4998]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 10:17:28 crc kubenswrapper[4998]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 27 10:17:28 crc kubenswrapper[4998]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 10:17:28 crc kubenswrapper[4998]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 27 10:17:28 crc kubenswrapper[4998]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 27 10:17:28 crc kubenswrapper[4998]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.487276 4998 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495530 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495563 4998 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495574 4998 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495584 4998 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495593 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495618 4998 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495627 4998 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495635 4998 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495644 4998 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495652 4998 feature_gate.go:330] 
unrecognized feature gate: BootcNodeManagement Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495661 4998 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495671 4998 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495679 4998 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495687 4998 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495696 4998 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495704 4998 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495712 4998 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495720 4998 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495728 4998 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495737 4998 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495748 4998 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495760 4998 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495769 4998 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495778 4998 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495787 4998 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495798 4998 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495808 4998 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495818 4998 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495826 4998 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495839 4998 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495849 4998 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495857 4998 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495866 4998 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495874 4998 feature_gate.go:330] unrecognized feature gate: Example Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495885 4998 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495908 4998 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495917 4998 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495929 4998 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495938 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495947 4998 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495957 4998 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495965 4998 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495973 4998 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495982 4998 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 10:17:28 crc 
kubenswrapper[4998]: W0227 10:17:28.495990 4998 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.495999 4998 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496007 4998 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496015 4998 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496024 4998 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496032 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496041 4998 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496049 4998 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496057 4998 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496068 4998 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496076 4998 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496084 4998 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496093 4998 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496101 4998 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496109 4998 
feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496117 4998 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496125 4998 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496134 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496142 4998 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496150 4998 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496158 4998 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496166 4998 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496175 4998 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496183 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496192 4998 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496200 4998 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.496208 4998 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497422 4998 flags.go:64] FLAG: --address="0.0.0.0" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497451 4998 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 27 10:17:28 crc 
kubenswrapper[4998]: I0227 10:17:28.497469 4998 flags.go:64] FLAG: --anonymous-auth="true" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497481 4998 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497493 4998 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497505 4998 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497551 4998 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497563 4998 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497574 4998 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497584 4998 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497595 4998 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497605 4998 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497615 4998 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497625 4998 flags.go:64] FLAG: --cgroup-root="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497634 4998 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497644 4998 flags.go:64] FLAG: --client-ca-file="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497654 4998 flags.go:64] FLAG: --cloud-config="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497663 4998 flags.go:64] FLAG: --cloud-provider="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497672 4998 
flags.go:64] FLAG: --cluster-dns="[]" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497683 4998 flags.go:64] FLAG: --cluster-domain="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497692 4998 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497702 4998 flags.go:64] FLAG: --config-dir="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497712 4998 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497722 4998 flags.go:64] FLAG: --container-log-max-files="5" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497734 4998 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497744 4998 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497754 4998 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497765 4998 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497774 4998 flags.go:64] FLAG: --contention-profiling="false" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497784 4998 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497794 4998 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497804 4998 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497813 4998 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497825 4998 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497835 4998 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 27 10:17:28 crc 
kubenswrapper[4998]: I0227 10:17:28.497844 4998 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497853 4998 flags.go:64] FLAG: --enable-load-reader="false" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497864 4998 flags.go:64] FLAG: --enable-server="true" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.497874 4998 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498548 4998 flags.go:64] FLAG: --event-burst="100" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498564 4998 flags.go:64] FLAG: --event-qps="50" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498651 4998 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498667 4998 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498681 4998 flags.go:64] FLAG: --eviction-hard="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498699 4998 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498712 4998 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498725 4998 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498750 4998 flags.go:64] FLAG: --eviction-soft="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498762 4998 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498774 4998 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498787 4998 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498800 4998 flags.go:64] FLAG: --experimental-mounter-path="" Feb 
27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498812 4998 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498825 4998 flags.go:64] FLAG: --fail-swap-on="true" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498838 4998 flags.go:64] FLAG: --feature-gates="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498862 4998 flags.go:64] FLAG: --file-check-frequency="20s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498875 4998 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498888 4998 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498902 4998 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498915 4998 flags.go:64] FLAG: --healthz-port="10248" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498927 4998 flags.go:64] FLAG: --help="false" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498940 4998 flags.go:64] FLAG: --hostname-override="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498952 4998 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498966 4998 flags.go:64] FLAG: --http-check-frequency="20s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498987 4998 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.498999 4998 flags.go:64] FLAG: --image-credential-provider-config="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499012 4998 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499024 4998 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499037 4998 flags.go:64] FLAG: --image-service-endpoint="" Feb 27 10:17:28 crc 
kubenswrapper[4998]: I0227 10:17:28.499049 4998 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499062 4998 flags.go:64] FLAG: --kube-api-burst="100"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499075 4998 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499088 4998 flags.go:64] FLAG: --kube-api-qps="50"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499109 4998 flags.go:64] FLAG: --kube-reserved=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499122 4998 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499134 4998 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499147 4998 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499161 4998 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499174 4998 flags.go:64] FLAG: --lock-file=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499187 4998 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499199 4998 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499252 4998 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499275 4998 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499287 4998 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499302 4998 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499315 4998 flags.go:64] FLAG: --logging-format="text"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499328 4998 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499342 4998 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499356 4998 flags.go:64] FLAG: --manifest-url=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499368 4998 flags.go:64] FLAG: --manifest-url-header=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499396 4998 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499409 4998 flags.go:64] FLAG: --max-open-files="1000000"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499426 4998 flags.go:64] FLAG: --max-pods="110"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499438 4998 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499453 4998 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499466 4998 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499479 4998 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499492 4998 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499514 4998 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499527 4998 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499572 4998 flags.go:64] FLAG: --node-status-max-images="50"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499583 4998 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499593 4998 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499604 4998 flags.go:64] FLAG: --pod-cidr=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499613 4998 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499629 4998 flags.go:64] FLAG: --pod-manifest-path=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499639 4998 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499649 4998 flags.go:64] FLAG: --pods-per-core="0"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499659 4998 flags.go:64] FLAG: --port="10250"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499676 4998 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499686 4998 flags.go:64] FLAG: --provider-id=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499696 4998 flags.go:64] FLAG: --qos-reserved=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499706 4998 flags.go:64] FLAG: --read-only-port="10255"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499718 4998 flags.go:64] FLAG: --register-node="true"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499729 4998 flags.go:64] FLAG: --register-schedulable="true"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499740 4998 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499770 4998 flags.go:64] FLAG: --registry-burst="10"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499783 4998 flags.go:64] FLAG: --registry-qps="5"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499796 4998 flags.go:64] FLAG: --reserved-cpus=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499809 4998 flags.go:64] FLAG: --reserved-memory=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499826 4998 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499840 4998 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499854 4998 flags.go:64] FLAG: --rotate-certificates="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499867 4998 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499882 4998 flags.go:64] FLAG: --runonce="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499904 4998 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499917 4998 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499932 4998 flags.go:64] FLAG: --seccomp-default="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499945 4998 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499958 4998 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499972 4998 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499985 4998 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.499998 4998 flags.go:64] FLAG: --storage-driver-password="root"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500020 4998 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500033 4998 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500045 4998 flags.go:64] FLAG: --storage-driver-user="root"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500059 4998 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500072 4998 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500085 4998 flags.go:64] FLAG: --system-cgroups=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500097 4998 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500154 4998 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500171 4998 flags.go:64] FLAG: --tls-cert-file=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500185 4998 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500255 4998 flags.go:64] FLAG: --tls-min-version=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500278 4998 flags.go:64] FLAG: --tls-private-key-file=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500304 4998 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500311 4998 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500318 4998 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500324 4998 flags.go:64] FLAG: --v="2"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500346 4998 flags.go:64] FLAG: --version="false"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500354 4998 flags.go:64] FLAG: --vmodule=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500361 4998 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.500368 4998 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500656 4998 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500664 4998 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500670 4998 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500678 4998 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500684 4998 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500690 4998 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500696 4998 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500706 4998 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500711 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500717 4998 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500722 4998 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500727 4998 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500734 4998 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500742 4998 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500749 4998 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500793 4998 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500807 4998 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.500813 4998 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501026 4998 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501038 4998 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501048 4998 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501055 4998 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501061 4998 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501069 4998 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501115 4998 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501123 4998 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501128 4998 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501134 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501139 4998 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501144 4998 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501149 4998 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501154 4998 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501159 4998 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501164 4998 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501169 4998 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501174 4998 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501179 4998 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501184 4998 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501189 4998 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501194 4998 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501199 4998 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501204 4998 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501209 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501215 4998 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501252 4998 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501267 4998 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501274 4998 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501281 4998 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501287 4998 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501293 4998 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501298 4998 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501303 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501309 4998 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501314 4998 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501319 4998 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501324 4998 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501329 4998 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501334 4998 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501339 4998 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501344 4998 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501349 4998 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501354 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501360 4998 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501365 4998 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501369 4998 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501374 4998 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501379 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501383 4998 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501388 4998 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501393 4998 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.501398 4998 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.501431 4998 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.512061 4998 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.512111 4998 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512283 4998 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512299 4998 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512310 4998 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512320 4998 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512330 4998 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512339 4998 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512351 4998 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512361 4998 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512372 4998 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512381 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512390 4998 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512400 4998 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512408 4998 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512416 4998 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512425 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512433 4998 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512442 4998 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512450 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512458 4998 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512468 4998 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512476 4998 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512486 4998 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512495 4998 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512503 4998 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512516 4998 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512527 4998 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512537 4998 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512547 4998 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512556 4998 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512565 4998 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512573 4998 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512582 4998 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512590 4998 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512599 4998 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512607 4998 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512616 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512624 4998 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512632 4998 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512641 4998 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512649 4998 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512658 4998 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512667 4998 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512676 4998 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512685 4998 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512694 4998 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512703 4998 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512712 4998 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512721 4998 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512730 4998 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512739 4998 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512748 4998 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512759 4998 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512771 4998 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512781 4998 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512791 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512802 4998 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512811 4998 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512821 4998 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512831 4998 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512840 4998 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512849 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512861 4998 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512871 4998 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512882 4998 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512893 4998 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512902 4998 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512936 4998 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512949 4998 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512960 4998 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512969 4998 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.512978 4998 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.512991 4998 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513221 4998 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513266 4998 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513277 4998 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513287 4998 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513296 4998 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513305 4998 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513314 4998 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513323 4998 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513332 4998 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513341 4998 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513351 4998 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513359 4998 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513369 4998 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513378 4998 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513387 4998 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513395 4998 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513403 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513412 4998 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513423 4998 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513435 4998 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513446 4998 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513455 4998 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513465 4998 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513474 4998 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513484 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513493 4998 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513502 4998 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513511 4998 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513552 4998 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513561 4998 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513570 4998 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513581 4998 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513590 4998 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513598 4998 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513606 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513618 4998 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513626 4998 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513635 4998 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513643 4998 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513651 4998 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513660 4998 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513668 4998 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513677 4998 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513685 4998 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513696 4998 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513704 4998 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513713 4998 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513721 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513729 4998 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513737 4998 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513746 4998 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513756 4998 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513765 4998 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513773 4998 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513781 4998 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513790 4998 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513801 4998 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513812 4998 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513821 4998 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513831 4998 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513841 4998 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513850 4998 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513859 4998 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513867 4998 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513875 4998 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513884 4998 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513892 4998 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513903 4998 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513911 4998 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513920 4998 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.513928 4998 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.513940 4998 feature_gate.go:386]
feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.515633 4998 server.go:940] "Client rotation is on, will bootstrap in background" Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.521646 4998 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.526370 4998 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.526524 4998 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.529439 4998 server.go:997] "Starting client certificate rotation" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.529501 4998 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.529763 4998 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.564149 4998 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.568417 4998 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.568457 4998 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.595286 4998 log.go:25] "Validated CRI v1 runtime API" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.631606 4998 log.go:25] "Validated CRI v1 image API" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.634049 4998 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.640759 4998 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-27-10-13-13-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.640824 4998 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.662091 4998 manager.go:217] Machine: {Timestamp:2026-02-27 10:17:28.657143605 +0000 UTC m=+0.655414613 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9cb598f9-84ae-4703-b4b9-6775104308e7 BootID:a76c225a-d617-4499-bb32-eb42f31208cd Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:9e:f3:5d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9e:f3:5d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4a:07:3b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3e:57:ef Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:74:19:12 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a4:a7:bc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:16:90:ad:3e:e1:5f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:96:d5:9a:51:d6:a4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.662421 4998 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.662584 4998 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.663026 4998 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.663259 4998 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.663300 4998 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.664604 4998 topology_manager.go:138] "Creating topology manager with none policy" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.664629 4998 container_manager_linux.go:303] "Creating device plugin manager" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.665381 4998 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.665416 4998 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.666745 4998 state_mem.go:36] "Initialized new in-memory state store" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.666864 4998 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.672440 4998 kubelet.go:418] "Attempting to sync node with API server" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.672473 4998 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.672495 4998 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.672512 4998 kubelet.go:324] "Adding apiserver pod source" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.672528 4998 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 
10:17:28.680975 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.681089 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.680978 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.681156 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.681693 4998 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.682686 4998 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.686149 4998 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688395 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688447 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688465 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688481 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688503 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688517 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688531 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688553 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688569 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688584 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688605 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.688619 4998 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.689896 4998 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.690671 4998 server.go:1280] "Started kubelet" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.690719 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.692055 4998 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.692078 4998 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 27 10:17:28 crc systemd[1]: Started Kubernetes Kubelet. Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.694021 4998 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.695540 4998 server.go:460] "Adding debug handlers to kubelet server" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.695637 4998 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.695676 4998 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.695921 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.695930 4998 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.695960 4998 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.695970 4998 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 27 10:17:28 crc 
kubenswrapper[4998]: I0227 10:17:28.696862 4998 factory.go:55] Registering systemd factory Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.696907 4998 factory.go:221] Registration of the systemd container factory successfully Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.697163 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.697294 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.697363 4998 factory.go:153] Registering CRI-O factory Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.697386 4998 factory.go:221] Registration of the crio container factory successfully Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.697475 4998 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.697505 4998 factory.go:103] Registering Raw factory Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.697565 4998 manager.go:1196] Started watching for new ooms in manager Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.697787 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.698794 4998 manager.go:319] Starting recovery of all containers Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.697356 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189813185a9c17f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,LastTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705161 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705220 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705254 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" 
seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705269 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705280 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705294 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705307 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705320 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705334 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 
10:17:28.705348 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705359 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705370 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705384 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705398 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705434 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705449 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705462 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705486 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705499 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705511 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705524 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705558 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705570 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705586 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705598 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705609 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705625 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705640 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705653 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705665 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705677 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705693 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705709 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705722 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705734 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705746 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705759 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705772 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705786 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705798 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705810 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705820 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705831 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705843 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.705857 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.707848 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.707955 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708018 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708053 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708111 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708141 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708169 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708311 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708372 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708416 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708481 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708533 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708565 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708595 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708632 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708660 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708699 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708728 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708757 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708816 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.708855 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709052 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709100 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709143 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709200 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709275 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709320 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709349 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709395 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709442 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709485 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709523 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709581 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709625 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709778 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709865 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709980 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.709999 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710021 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710041 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710109 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710133 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710146 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710161 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710184 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710198 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710215 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710247 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710263 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710282 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710780 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710826 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710842 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710856 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710873 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710884 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710900 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.710971 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711007 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711102 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711125 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711142 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711161 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711180 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711247 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711267 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711302 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711367 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711428 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711449 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711469 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711482 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711559 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711589 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711615 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711631 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711644 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711660 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711671 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711686 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711702 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711731 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711778 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711825 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711836 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711852 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711864 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711880 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711892 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711905 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711961 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.711976 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.712767 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.712789 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28"
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.712968 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.712991 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713070 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713096 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713146 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713200 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713215 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713252 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713270 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713328 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713352 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713365 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 
27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713413 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713463 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713654 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713674 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713696 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713709 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713721 4998 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713788 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.713933 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714015 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714085 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714108 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714126 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714141 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714240 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714259 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714320 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714338 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714458 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714486 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714503 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714622 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714652 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714670 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714753 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714808 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714899 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714923 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.714939 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715213 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715256 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715276 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715339 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715463 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715485 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715507 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715520 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715537 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.715560 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.720866 4998 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.720960 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.720983 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721001 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721017 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721032 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721049 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721064 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721079 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721095 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721163 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721180 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721194 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721256 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721271 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721285 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" 
seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721319 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721336 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721348 4998 reconstruct.go:97] "Volume reconstruction finished" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.721357 4998 reconciler.go:26] "Reconciler: start to sync state" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.730293 4998 manager.go:324] Recovery completed Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.746046 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.747795 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.747867 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.747885 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.749384 4998 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.749403 4998 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 
10:17:28.749431 4998 state_mem.go:36] "Initialized new in-memory state store" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.760255 4998 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.761945 4998 policy_none.go:49] "None policy: Start" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.762935 4998 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.762968 4998 state_mem.go:35] "Initializing new in-memory state store" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.763515 4998 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.763594 4998 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.763629 4998 kubelet.go:2335] "Starting kubelet main sync loop" Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.763879 4998 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 27 10:17:28 crc kubenswrapper[4998]: W0227 10:17:28.766079 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.766141 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 
10:17:28.796575 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.829817 4998 manager.go:334] "Starting Device Plugin manager" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.829981 4998 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.830193 4998 server.go:79] "Starting device plugin registration server" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.830608 4998 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.830670 4998 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.830862 4998 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.831412 4998 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.831430 4998 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.840003 4998 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.864837 4998 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.864986 4998 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.868243 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.868297 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.868307 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.868578 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.868888 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.868961 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.870253 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.870289 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.870297 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.870334 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.870345 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.870301 4998 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.870649 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.870831 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.870915 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.872139 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.872170 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.872182 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.872305 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.872335 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.872350 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.872351 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.872569 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.872604 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.873326 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.873349 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.873359 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.873395 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.873416 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.873431 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.873578 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.873687 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.873714 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.874379 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.874400 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.874407 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.874572 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.874604 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.874615 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.874883 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.874942 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.875886 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.875909 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.875929 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.898725 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923179 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923318 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923354 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923430 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923516 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923578 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923604 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923653 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923677 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923722 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923750 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923798 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923824 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923849 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.923896 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.930890 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.932289 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.932336 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.932348 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:28 crc kubenswrapper[4998]: I0227 10:17:28.932376 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:17:28 crc kubenswrapper[4998]: E0227 10:17:28.932825 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 27 10:17:29 crc 
kubenswrapper[4998]: I0227 10:17:29.025664 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.025739 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.025880 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.025874 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026035 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026123 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026206 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026308 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026395 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026428 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026490 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026503 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026556 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026582 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026604 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026656 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026674 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026710 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026758 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026770 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026805 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026822 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026916 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026979 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026929 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.027005 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.027101 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.027131 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.026871 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.027270 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.133434 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.135977 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.136308 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.136401 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.136493 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:17:29 crc kubenswrapper[4998]: E0227 10:17:29.137382 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: 
connect: connection refused" node="crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.195012 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.200977 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.215636 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.231932 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.235862 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:17:29 crc kubenswrapper[4998]: W0227 10:17:29.248135 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-52f412f2efbaaa5422a01f300f2fcb61c19d6fc7fb10145e654d09b891891244 WatchSource:0}: Error finding container 52f412f2efbaaa5422a01f300f2fcb61c19d6fc7fb10145e654d09b891891244: Status 404 returned error can't find the container with id 52f412f2efbaaa5422a01f300f2fcb61c19d6fc7fb10145e654d09b891891244 Feb 27 10:17:29 crc kubenswrapper[4998]: W0227 10:17:29.249845 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-821599ddaeb88a49c91710a2334c0770ba436ce5ba92efaff84d4d60a2dc4870 WatchSource:0}: Error finding container 821599ddaeb88a49c91710a2334c0770ba436ce5ba92efaff84d4d60a2dc4870: Status 404 returned error can't find the 
container with id 821599ddaeb88a49c91710a2334c0770ba436ce5ba92efaff84d4d60a2dc4870 Feb 27 10:17:29 crc kubenswrapper[4998]: W0227 10:17:29.259645 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cd39913932a9ad99a003d018a3a43d628e92f283291a816000ace93714bab549 WatchSource:0}: Error finding container cd39913932a9ad99a003d018a3a43d628e92f283291a816000ace93714bab549: Status 404 returned error can't find the container with id cd39913932a9ad99a003d018a3a43d628e92f283291a816000ace93714bab549 Feb 27 10:17:29 crc kubenswrapper[4998]: W0227 10:17:29.261991 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d39948050d3ab95e3bba8a741168348edce3138db60d762e814ae57913760eca WatchSource:0}: Error finding container d39948050d3ab95e3bba8a741168348edce3138db60d762e814ae57913760eca: Status 404 returned error can't find the container with id d39948050d3ab95e3bba8a741168348edce3138db60d762e814ae57913760eca Feb 27 10:17:29 crc kubenswrapper[4998]: W0227 10:17:29.264482 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-796c204a8545cb23ee0d23cd62dc9a49f066d909a731e0db30793c9c82d11fcd WatchSource:0}: Error finding container 796c204a8545cb23ee0d23cd62dc9a49f066d909a731e0db30793c9c82d11fcd: Status 404 returned error can't find the container with id 796c204a8545cb23ee0d23cd62dc9a49f066d909a731e0db30793c9c82d11fcd Feb 27 10:17:29 crc kubenswrapper[4998]: E0227 10:17:29.300210 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection 
refused" interval="800ms"
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.538160 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.539787 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.539861 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.539882 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.539922 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:17:29 crc kubenswrapper[4998]: E0227 10:17:29.540555 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc"
Feb 27 10:17:29 crc kubenswrapper[4998]: W0227 10:17:29.685920 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:29 crc kubenswrapper[4998]: E0227 10:17:29.686025 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.692313 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.767992 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"796c204a8545cb23ee0d23cd62dc9a49f066d909a731e0db30793c9c82d11fcd"}
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.769122 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d39948050d3ab95e3bba8a741168348edce3138db60d762e814ae57913760eca"}
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.771509 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd39913932a9ad99a003d018a3a43d628e92f283291a816000ace93714bab549"}
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.777361 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52f412f2efbaaa5422a01f300f2fcb61c19d6fc7fb10145e654d09b891891244"}
Feb 27 10:17:29 crc kubenswrapper[4998]: I0227 10:17:29.780367 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"821599ddaeb88a49c91710a2334c0770ba436ce5ba92efaff84d4d60a2dc4870"}
Feb 27 10:17:29 crc kubenswrapper[4998]: W0227 10:17:29.857574 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:29 crc kubenswrapper[4998]: E0227 10:17:29.857723 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 27 10:17:29 crc kubenswrapper[4998]: W0227 10:17:29.912196 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:29 crc kubenswrapper[4998]: E0227 10:17:29.912349 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 27 10:17:30 crc kubenswrapper[4998]: W0227 10:17:30.100105 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:30 crc kubenswrapper[4998]: E0227 10:17:30.100668 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 27 10:17:30 crc kubenswrapper[4998]: E0227 10:17:30.101136 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.341149 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.343275 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.343366 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.343381 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.343455 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:17:30 crc kubenswrapper[4998]: E0227 10:17:30.344512 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.626017 4998 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 10:17:30 crc kubenswrapper[4998]: E0227 10:17:30.627469 4998 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.691928 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.784435 4998 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7" exitCode=0
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.784516 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7"}
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.784535 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.785549 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.785596 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.785615 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.787551 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3aa3432d6c4ec6208a82daff53e88297a9a36dcba4740c03f7982e806c8278e"}
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.787603 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3b695c09816dfe060e15e662b7e0ebacd71ebee97b82f7589f141751789fd82"}
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.787617 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a"}
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.789407 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb" exitCode=0
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.789477 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb"}
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.789526 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.790469 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.790511 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.790526 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.791662 4998 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f" exitCode=0
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.791740 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f"}
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.791806 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.792776 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:30 crc kubenswrapper[4998]: E0227 10:17:30.792719 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189813185a9c17f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,LastTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.792813 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.792829 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.793869 4998 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee" exitCode=0
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.793905 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee"}
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.793972 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.794010 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.794895 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.794936 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.794969 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.794987 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.795008 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:30 crc kubenswrapper[4998]: I0227 10:17:30.795021 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:31 crc kubenswrapper[4998]: W0227 10:17:31.608654 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:31 crc kubenswrapper[4998]: E0227 10:17:31.608773 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.691756 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:31 crc kubenswrapper[4998]: E0227 10:17:31.702929 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.801187 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1e09abfc39e6ff32d77e2feeb903f10bb1d82033d56c9b6b7cc45bf52c6a5d2a"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.801419 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.802637 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.802675 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.802688 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.805915 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.805952 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.805966 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.806015 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.807775 4998 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a" exitCode=0
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.807836 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.807979 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.810495 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.810536 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.810546 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.816111 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.816174 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.816189 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.816405 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.817766 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.817802 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.817815 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.819216 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e"}
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.819346 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.820593 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.820640 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.820657 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:31 crc kubenswrapper[4998]: W0227 10:17:31.882923 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:31 crc kubenswrapper[4998]: E0227 10:17:31.883012 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.945121 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.946271 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.946316 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.946328 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:31 crc kubenswrapper[4998]: I0227 10:17:31.946360 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:17:31 crc kubenswrapper[4998]: E0227 10:17:31.946809 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc"
Feb 27 10:17:32 crc kubenswrapper[4998]: W0227 10:17:32.441503 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:32 crc kubenswrapper[4998]: E0227 10:17:32.441602 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 27 10:17:32 crc kubenswrapper[4998]: W0227 10:17:32.592621 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:32 crc kubenswrapper[4998]: E0227 10:17:32.592724 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.692392 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.824764 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"174ea617a055800e0b887e9bb6834967f44deffbce6fd7d548875cfb47777848"}
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.824934 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.825934 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.825969 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.825978 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.829411 4998 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87" exitCode=0
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.829569 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.829628 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.829680 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87"}
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.829802 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.829831 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.829842 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.830969 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831006 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.830972 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831046 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831055 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831018 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831076 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831078 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831063 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831113 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831128 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:32 crc kubenswrapper[4998]: I0227 10:17:32.831130 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:33 crc kubenswrapper[4998]: I0227 10:17:33.837856 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d760d1817981842ba5ff7cb6e99e17a2bb3c2a706ac403aff1e9df265bc38e6"}
Feb 27 10:17:33 crc kubenswrapper[4998]: I0227 10:17:33.837914 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7b0013632790bf9b7d865e7dd2b831daeac0c10446a270d6017c83c65f5687ff"}
Feb 27 10:17:33 crc kubenswrapper[4998]: I0227 10:17:33.837931 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ceedf973a7cd5ad682ffbb7ac21dbfff2f467cbebab605bc17fcb5eb53e05f55"}
Feb 27 10:17:33 crc kubenswrapper[4998]: I0227 10:17:33.837959 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:33 crc kubenswrapper[4998]: I0227 10:17:33.838057 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:17:33 crc kubenswrapper[4998]: I0227 10:17:33.839011 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:33 crc kubenswrapper[4998]: I0227 10:17:33.839094 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:33 crc kubenswrapper[4998]: I0227 10:17:33.839121 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.237170 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.309700 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.309905 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.311538 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.311600 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.311623 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.729661 4998 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.845980 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5f99112c9996d81cabf24593764be3695882a1acbdb45233464eb28fbdaa0869"}
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.846041 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"07faee8c5639e04f711dedbcfa4188da5234cf6ccb4c1800d23311dc29e41f44"}
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.846069 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.846140 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.846969 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.847000 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.847009 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.847252 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.847307 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:34 crc kubenswrapper[4998]: I0227 10:17:34.847317 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.147697 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.148991 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.149039 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.149051 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.149081 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.361730 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.361983 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.363477 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.363538 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.363550 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.848308 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.848307 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.849196 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.849279 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.849291 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.850091 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.850124 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.850133 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:35 crc kubenswrapper[4998]: I0227 10:17:35.900066 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 27 10:17:36 crc kubenswrapper[4998]: I0227 10:17:36.422977 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:17:36 crc kubenswrapper[4998]: I0227 10:17:36.423162 4998
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:36 crc kubenswrapper[4998]: I0227 10:17:36.424303 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:36 crc kubenswrapper[4998]: I0227 10:17:36.424348 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:36 crc kubenswrapper[4998]: I0227 10:17:36.424361 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:36 crc kubenswrapper[4998]: I0227 10:17:36.850915 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:36 crc kubenswrapper[4998]: I0227 10:17:36.851722 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:36 crc kubenswrapper[4998]: I0227 10:17:36.851764 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:36 crc kubenswrapper[4998]: I0227 10:17:36.851776 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.675670 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.675922 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.677585 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.677626 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 
10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.677640 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.842644 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 27 10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.853864 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.854870 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.854926 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:37 crc kubenswrapper[4998]: I0227 10:17:37.854946 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.362457 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.362661 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.363781 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.363822 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.363833 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.378515 4998 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:38 crc kubenswrapper[4998]: E0227 10:17:38.840204 4998 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.857130 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.858708 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.858786 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:38 crc kubenswrapper[4998]: I0227 10:17:38.858811 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.803795 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.804060 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.805457 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.805495 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.805504 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.809161 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.862827 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.863763 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.863820 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:40 crc kubenswrapper[4998]: I0227 10:17:40.863832 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.184176 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44718->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.184212 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44726->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.184295 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44718->192.168.126.11:17697: read: connection reset by peer" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 
10:17:43.184323 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44726->192.168.126.11:17697: read: connection reset by peer" Feb 27 10:17:43 crc kubenswrapper[4998]: E0227 10:17:43.329610 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 27 10:17:43 crc kubenswrapper[4998]: W0227 10:17:43.330189 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z Feb 27 10:17:43 crc kubenswrapper[4998]: E0227 10:17:43.330342 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:17:43 crc kubenswrapper[4998]: W0227 10:17:43.331551 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z Feb 27 10:17:43 crc kubenswrapper[4998]: E0227 10:17:43.331619 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:17:43 crc kubenswrapper[4998]: E0227 10:17:43.331832 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 10:17:43 crc kubenswrapper[4998]: W0227 10:17:43.331942 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z Feb 27 10:17:43 crc kubenswrapper[4998]: E0227 10:17:43.332040 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:17:43 crc kubenswrapper[4998]: E0227 10:17:43.333032 4998 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:17:43 crc kubenswrapper[4998]: W0227 10:17:43.333498 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z Feb 27 10:17:43 crc kubenswrapper[4998]: E0227 10:17:43.333575 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.338023 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z Feb 27 10:17:43 crc kubenswrapper[4998]: E0227 10:17:43.341010 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813185a9c17f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,LastTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.345722 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.345811 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.353736 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.353813 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.696618 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:43Z is after 2026-02-23T05:33:13Z Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.803770 4998 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.803889 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.873131 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.875332 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="174ea617a055800e0b887e9bb6834967f44deffbce6fd7d548875cfb47777848" exitCode=255 Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.875400 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"174ea617a055800e0b887e9bb6834967f44deffbce6fd7d548875cfb47777848"} Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.875603 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.876529 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.876568 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.876583 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:43 crc kubenswrapper[4998]: I0227 10:17:43.877212 4998 scope.go:117] "RemoveContainer" containerID="174ea617a055800e0b887e9bb6834967f44deffbce6fd7d548875cfb47777848" Feb 27 10:17:44 crc kubenswrapper[4998]: I0227 10:17:44.697207 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:44Z is after 2026-02-23T05:33:13Z Feb 27 10:17:44 crc kubenswrapper[4998]: 
I0227 10:17:44.881672 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 10:17:44 crc kubenswrapper[4998]: I0227 10:17:44.885644 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b"} Feb 27 10:17:44 crc kubenswrapper[4998]: I0227 10:17:44.885870 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:44 crc kubenswrapper[4998]: I0227 10:17:44.887073 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:44 crc kubenswrapper[4998]: I0227 10:17:44.887110 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:44 crc kubenswrapper[4998]: I0227 10:17:44.887123 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.694341 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:45Z is after 2026-02-23T05:33:13Z Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.889336 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.890046 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.891824 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b" exitCode=255 Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.891885 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b"} Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.891956 4998 scope.go:117] "RemoveContainer" containerID="174ea617a055800e0b887e9bb6834967f44deffbce6fd7d548875cfb47777848" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.892070 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.892789 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.892828 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.892853 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.893439 4998 scope.go:117] "RemoveContainer" containerID="6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b" Feb 27 10:17:45 crc kubenswrapper[4998]: E0227 10:17:45.893656 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.925495 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.925686 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.926852 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.926887 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.926900 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:45 crc kubenswrapper[4998]: I0227 10:17:45.939396 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 27 10:17:46 crc kubenswrapper[4998]: I0227 10:17:46.697204 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:46Z is after 2026-02-23T05:33:13Z Feb 27 10:17:46 crc kubenswrapper[4998]: I0227 10:17:46.896593 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 10:17:46 crc kubenswrapper[4998]: I0227 10:17:46.898654 4998 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 27 10:17:46 crc kubenswrapper[4998]: I0227 10:17:46.899487 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:46 crc kubenswrapper[4998]: I0227 10:17:46.899554 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:46 crc kubenswrapper[4998]: I0227 10:17:46.899576 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.682589 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.682794 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.684171 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.684219 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.684254 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.684974 4998 scope.go:117] "RemoveContainer" containerID="6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b" Feb 27 10:17:47 crc kubenswrapper[4998]: E0227 10:17:47.685241 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.686325 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.694348 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:47Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.904013 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.905456 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.905504 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.905519 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:47 crc kubenswrapper[4998]: I0227 10:17:47.906273 4998 scope.go:117] "RemoveContainer" containerID="6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b"
Feb 27 10:17:47 crc kubenswrapper[4998]: E0227 10:17:47.906506 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:17:48 crc kubenswrapper[4998]: I0227 10:17:48.697745 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:48Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:48 crc kubenswrapper[4998]: E0227 10:17:48.840528 4998 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:17:49 crc kubenswrapper[4998]: I0227 10:17:49.694574 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:49Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:49 crc kubenswrapper[4998]: I0227 10:17:49.732392 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:49 crc kubenswrapper[4998]: I0227 10:17:49.734116 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:49 crc kubenswrapper[4998]: I0227 10:17:49.734180 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:49 crc kubenswrapper[4998]: I0227 10:17:49.734198 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:49 crc kubenswrapper[4998]: I0227 10:17:49.734268 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:17:49 crc kubenswrapper[4998]: E0227 10:17:49.735245 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:49Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 27 10:17:49 crc kubenswrapper[4998]: E0227 10:17:49.739083 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:49Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 10:17:49 crc kubenswrapper[4998]: W0227 10:17:49.894257 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:49Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:49 crc kubenswrapper[4998]: E0227 10:17:49.894354 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:17:50 crc kubenswrapper[4998]: W0227 10:17:50.665330 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:50Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:50 crc kubenswrapper[4998]: E0227 10:17:50.665455 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:17:50 crc kubenswrapper[4998]: I0227 10:17:50.695648 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:50Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:51 crc kubenswrapper[4998]: W0227 10:17:51.340682 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:51Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:51 crc kubenswrapper[4998]: E0227 10:17:51.340788 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:17:51 crc kubenswrapper[4998]: I0227 10:17:51.682543 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:17:51 crc kubenswrapper[4998]: I0227 10:17:51.682842 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:51 crc kubenswrapper[4998]: I0227 10:17:51.684615 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:51 crc kubenswrapper[4998]: I0227 10:17:51.684666 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:51 crc kubenswrapper[4998]: I0227 10:17:51.684675 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:51 crc kubenswrapper[4998]: I0227 10:17:51.685330 4998 scope.go:117] "RemoveContainer" containerID="6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b"
Feb 27 10:17:51 crc kubenswrapper[4998]: E0227 10:17:51.685557 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:17:51 crc kubenswrapper[4998]: I0227 10:17:51.694131 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:51Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:52 crc kubenswrapper[4998]: I0227 10:17:52.108409 4998 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 10:17:52 crc kubenswrapper[4998]: E0227 10:17:52.113282 4998 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:17:52 crc kubenswrapper[4998]: W0227 10:17:52.201621 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:52Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:52 crc kubenswrapper[4998]: E0227 10:17:52.201707 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:17:52 crc kubenswrapper[4998]: I0227 10:17:52.235365 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:17:52 crc kubenswrapper[4998]: I0227 10:17:52.235692 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:52 crc kubenswrapper[4998]: I0227 10:17:52.237277 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:52 crc kubenswrapper[4998]: I0227 10:17:52.237343 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:52 crc kubenswrapper[4998]: I0227 10:17:52.237366 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:52 crc kubenswrapper[4998]: I0227 10:17:52.238351 4998 scope.go:117] "RemoveContainer" containerID="6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b"
Feb 27 10:17:52 crc kubenswrapper[4998]: E0227 10:17:52.238614 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:17:52 crc kubenswrapper[4998]: I0227 10:17:52.694632 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:52Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:53 crc kubenswrapper[4998]: E0227 10:17:53.345741 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:53Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813185a9c17f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,LastTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:17:53 crc kubenswrapper[4998]: I0227 10:17:53.695147 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:53Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:53 crc kubenswrapper[4998]: I0227 10:17:53.804697 4998 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 10:17:53 crc kubenswrapper[4998]: I0227 10:17:53.804768 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 10:17:54 crc kubenswrapper[4998]: I0227 10:17:54.693803 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:54Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:55 crc kubenswrapper[4998]: I0227 10:17:55.697022 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:55Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:56 crc kubenswrapper[4998]: I0227 10:17:56.696459 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:56Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:56 crc kubenswrapper[4998]: I0227 10:17:56.740290 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:17:56 crc kubenswrapper[4998]: I0227 10:17:56.742494 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:17:56 crc kubenswrapper[4998]: I0227 10:17:56.742567 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:17:56 crc kubenswrapper[4998]: I0227 10:17:56.742596 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:17:56 crc kubenswrapper[4998]: I0227 10:17:56.742651 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:17:56 crc kubenswrapper[4998]: E0227 10:17:56.742971 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:56Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 27 10:17:56 crc kubenswrapper[4998]: E0227 10:17:56.746327 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:56Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 10:17:57 crc kubenswrapper[4998]: I0227 10:17:57.694158 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:57Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:58 crc kubenswrapper[4998]: I0227 10:17:58.694563 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:58Z is after 2026-02-23T05:33:13Z
Feb 27 10:17:58 crc kubenswrapper[4998]: E0227 10:17:58.840659 4998 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:17:59 crc kubenswrapper[4998]: I0227 10:17:59.697426 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:17:59Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:00 crc kubenswrapper[4998]: I0227 10:18:00.696266 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:00Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.696518 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:01Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.879331 4998 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:37934->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.879430 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:37934->192.168.126.11:10357: read: connection reset by peer"
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.879512 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.879724 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.881140 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.881166 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.881177 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.881627 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e3b695c09816dfe060e15e662b7e0ebacd71ebee97b82f7589f141751789fd82"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.881838 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://e3b695c09816dfe060e15e662b7e0ebacd71ebee97b82f7589f141751789fd82" gracePeriod=30
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.942706 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.943266 4998 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e3b695c09816dfe060e15e662b7e0ebacd71ebee97b82f7589f141751789fd82" exitCode=255
Feb 27 10:18:01 crc kubenswrapper[4998]: I0227 10:18:01.943335 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e3b695c09816dfe060e15e662b7e0ebacd71ebee97b82f7589f141751789fd82"}
Feb 27 10:18:02 crc kubenswrapper[4998]: I0227 10:18:02.695082 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:02Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:02 crc kubenswrapper[4998]: I0227 10:18:02.950883 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 27 10:18:02 crc kubenswrapper[4998]: I0227 10:18:02.951743 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a"}
Feb 27 10:18:02 crc kubenswrapper[4998]: I0227 10:18:02.951936 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:02 crc kubenswrapper[4998]: I0227 10:18:02.953161 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:02 crc kubenswrapper[4998]: I0227 10:18:02.953281 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:02 crc kubenswrapper[4998]: I0227 10:18:02.953311 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:03 crc kubenswrapper[4998]: E0227 10:18:03.349626 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:03Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813185a9c17f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,LastTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.694544 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:03Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.746930 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.748464 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.748541 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.748557 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.748588 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:18:03 crc kubenswrapper[4998]: E0227 10:18:03.750741 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:03Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 27 10:18:03 crc kubenswrapper[4998]: E0227 10:18:03.755051 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:03Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.954408 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.955512 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.955555 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:03 crc kubenswrapper[4998]: I0227 10:18:03.955566 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:04 crc kubenswrapper[4998]: I0227 10:18:04.310312 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:18:04 crc kubenswrapper[4998]: I0227 10:18:04.695032 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:04Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:04 crc kubenswrapper[4998]: I0227 10:18:04.957135 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:04 crc kubenswrapper[4998]: I0227 10:18:04.958505 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:04 crc kubenswrapper[4998]: I0227 10:18:04.958560 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:04 crc kubenswrapper[4998]: I0227 10:18:04.958573 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:05 crc kubenswrapper[4998]: W0227 10:18:05.450128 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:05Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:05 crc kubenswrapper[4998]: E0227 10:18:05.450272 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:18:05 crc kubenswrapper[4998]: I0227 10:18:05.695689 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:05Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:06 crc kubenswrapper[4998]: I0227 10:18:06.696462 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:06Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:06 crc kubenswrapper[4998]: I0227 10:18:06.764718 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:06 crc kubenswrapper[4998]: I0227 10:18:06.767059 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:06 crc kubenswrapper[4998]: I0227 10:18:06.767113 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:06 crc kubenswrapper[4998]: I0227 10:18:06.767127 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:06 crc kubenswrapper[4998]: I0227 10:18:06.767913 4998 scope.go:117] "RemoveContainer" containerID="6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b"
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.696002 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:07Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.968757 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.969487 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.971652 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="49baf55a1ea2d1102381bce1e14db27baca5eca38c787aee06cbf5395da0698f" exitCode=255
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.971703 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"49baf55a1ea2d1102381bce1e14db27baca5eca38c787aee06cbf5395da0698f"}
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.971748 4998 scope.go:117] "RemoveContainer" containerID="6588fa50fc66eb34a7e44c7cabc4a2c767e2342c092297cfeeaed2fcf33e982b"
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.971956 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.973206 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.973288 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.973316 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:07 crc kubenswrapper[4998]: I0227 10:18:07.974318 4998 scope.go:117] "RemoveContainer" containerID="49baf55a1ea2d1102381bce1e14db27baca5eca38c787aee06cbf5395da0698f"
Feb 27 10:18:07 crc kubenswrapper[4998]: E0227 10:18:07.974593 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:18:08 crc kubenswrapper[4998]: I0227 10:18:08.381911 4998 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 10:18:08 crc kubenswrapper[4998]: E0227 10:18:08.386203 4998 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:08Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:18:08 crc kubenswrapper[4998]: E0227 10:18:08.387501 4998 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError"
Feb 27 10:18:08 crc kubenswrapper[4998]: I0227 10:18:08.696341 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:08Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:08 crc kubenswrapper[4998]: E0227 10:18:08.841276 4998 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:18:08 crc kubenswrapper[4998]: I0227 10:18:08.984089 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 10:18:09 crc kubenswrapper[4998]: I0227 10:18:09.694360 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:09Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.695592 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:10Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.755391 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:10 crc kubenswrapper[4998]: E0227 10:18:10.755500 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:10Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.757071 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.757119 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.757146 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.757168 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:18:10 crc kubenswrapper[4998]: E0227 10:18:10.760711 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:10Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 10:18:10 crc kubenswrapper[4998]: W0227 10:18:10.775718 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:10Z is after 2026-02-23T05:33:13Z
Feb 27 10:18:10 crc kubenswrapper[4998]: E0227 10:18:10.775790 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.803665 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.803983 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.805582 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb
27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.805650 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:10 crc kubenswrapper[4998]: I0227 10:18:10.805675 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:11 crc kubenswrapper[4998]: I0227 10:18:11.683039 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:18:11 crc kubenswrapper[4998]: I0227 10:18:11.683303 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:11 crc kubenswrapper[4998]: I0227 10:18:11.684912 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:11 crc kubenswrapper[4998]: I0227 10:18:11.684956 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:11 crc kubenswrapper[4998]: I0227 10:18:11.684967 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:11 crc kubenswrapper[4998]: I0227 10:18:11.685590 4998 scope.go:117] "RemoveContainer" containerID="49baf55a1ea2d1102381bce1e14db27baca5eca38c787aee06cbf5395da0698f" Feb 27 10:18:11 crc kubenswrapper[4998]: E0227 10:18:11.685784 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:18:11 crc kubenswrapper[4998]: I0227 10:18:11.696382 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:11Z is after 2026-02-23T05:33:13Z Feb 27 10:18:12 crc kubenswrapper[4998]: I0227 10:18:12.235132 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:18:12 crc kubenswrapper[4998]: I0227 10:18:12.235412 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:12 crc kubenswrapper[4998]: I0227 10:18:12.237083 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:12 crc kubenswrapper[4998]: I0227 10:18:12.237144 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:12 crc kubenswrapper[4998]: I0227 10:18:12.237161 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:12 crc kubenswrapper[4998]: I0227 10:18:12.237951 4998 scope.go:117] "RemoveContainer" containerID="49baf55a1ea2d1102381bce1e14db27baca5eca38c787aee06cbf5395da0698f" Feb 27 10:18:12 crc kubenswrapper[4998]: E0227 10:18:12.238271 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:18:12 crc kubenswrapper[4998]: I0227 10:18:12.694524 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:12Z is after 2026-02-23T05:33:13Z Feb 27 10:18:13 crc kubenswrapper[4998]: E0227 10:18:13.354142 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:13Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813185a9c17f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,LastTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:13 crc kubenswrapper[4998]: I0227 10:18:13.695937 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:13Z is after 2026-02-23T05:33:13Z Feb 27 10:18:13 crc kubenswrapper[4998]: I0227 10:18:13.804281 4998 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 27 10:18:13 crc kubenswrapper[4998]: I0227 10:18:13.804353 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 10:18:13 crc kubenswrapper[4998]: W0227 10:18:13.947019 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:13Z is after 2026-02-23T05:33:13Z Feb 27 10:18:13 crc kubenswrapper[4998]: E0227 10:18:13.947107 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:18:14 crc kubenswrapper[4998]: W0227 10:18:14.267043 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:14Z is after 2026-02-23T05:33:13Z Feb 27 10:18:14 crc kubenswrapper[4998]: E0227 10:18:14.267126 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:18:14 crc kubenswrapper[4998]: I0227 10:18:14.695210 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:14Z is after 2026-02-23T05:33:13Z Feb 27 10:18:15 crc kubenswrapper[4998]: I0227 10:18:15.693985 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:15Z is after 2026-02-23T05:33:13Z Feb 27 10:18:16 crc kubenswrapper[4998]: I0227 10:18:16.694756 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:16Z is after 2026-02-23T05:33:13Z Feb 27 10:18:17 crc kubenswrapper[4998]: I0227 10:18:17.695528 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:17Z is after 2026-02-23T05:33:13Z Feb 27 10:18:17 crc kubenswrapper[4998]: E0227 10:18:17.759549 4998 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:17Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 10:18:17 crc kubenswrapper[4998]: I0227 10:18:17.761654 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:17 crc kubenswrapper[4998]: I0227 10:18:17.763097 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:17 crc kubenswrapper[4998]: I0227 10:18:17.763143 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:17 crc kubenswrapper[4998]: I0227 10:18:17.763155 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:17 crc kubenswrapper[4998]: I0227 10:18:17.763179 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:18:17 crc kubenswrapper[4998]: E0227 10:18:17.767512 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:17Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 10:18:18 crc kubenswrapper[4998]: I0227 10:18:18.696966 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:18Z is after 2026-02-23T05:33:13Z Feb 27 10:18:18 crc kubenswrapper[4998]: E0227 10:18:18.841864 4998 eviction_manager.go:285] "Eviction manager: 
failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:18:19 crc kubenswrapper[4998]: I0227 10:18:19.695817 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:19Z is after 2026-02-23T05:33:13Z Feb 27 10:18:20 crc kubenswrapper[4998]: I0227 10:18:20.694550 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:20Z is after 2026-02-23T05:33:13Z Feb 27 10:18:21 crc kubenswrapper[4998]: I0227 10:18:21.696624 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:21Z is after 2026-02-23T05:33:13Z Feb 27 10:18:22 crc kubenswrapper[4998]: I0227 10:18:22.694146 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:22Z is after 2026-02-23T05:33:13Z Feb 27 10:18:23 crc kubenswrapper[4998]: E0227 10:18:23.357580 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-27T10:18:23Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813185a9c17f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,LastTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:23 crc kubenswrapper[4998]: I0227 10:18:23.694107 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:18:23Z is after 2026-02-23T05:33:13Z Feb 27 10:18:23 crc kubenswrapper[4998]: I0227 10:18:23.764578 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:23 crc kubenswrapper[4998]: I0227 10:18:23.765786 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:23 crc kubenswrapper[4998]: I0227 10:18:23.765839 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:23 crc kubenswrapper[4998]: I0227 10:18:23.765851 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:23 crc kubenswrapper[4998]: I0227 10:18:23.766522 4998 scope.go:117] "RemoveContainer" containerID="49baf55a1ea2d1102381bce1e14db27baca5eca38c787aee06cbf5395da0698f" Feb 27 10:18:23 crc kubenswrapper[4998]: E0227 10:18:23.766722 4998 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:18:23 crc kubenswrapper[4998]: I0227 10:18:23.804357 4998 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 10:18:23 crc kubenswrapper[4998]: I0227 10:18:23.804416 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 10:18:24 crc kubenswrapper[4998]: I0227 10:18:24.696963 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:24 crc kubenswrapper[4998]: E0227 10:18:24.764809 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 10:18:24 crc kubenswrapper[4998]: I0227 10:18:24.767872 4998 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 27 10:18:24 crc kubenswrapper[4998]: I0227 10:18:24.769384 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:24 crc kubenswrapper[4998]: I0227 10:18:24.769420 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:24 crc kubenswrapper[4998]: I0227 10:18:24.769431 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:24 crc kubenswrapper[4998]: I0227 10:18:24.769458 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:18:24 crc kubenswrapper[4998]: E0227 10:18:24.774398 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 10:18:25 crc kubenswrapper[4998]: I0227 10:18:25.366261 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:18:25 crc kubenswrapper[4998]: I0227 10:18:25.366421 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:25 crc kubenswrapper[4998]: I0227 10:18:25.367603 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:25 crc kubenswrapper[4998]: I0227 10:18:25.367741 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:25 crc kubenswrapper[4998]: I0227 10:18:25.367824 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:25 crc kubenswrapper[4998]: I0227 10:18:25.697994 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:26 crc kubenswrapper[4998]: I0227 10:18:26.697640 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:27 crc kubenswrapper[4998]: I0227 10:18:27.696651 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:28 crc kubenswrapper[4998]: I0227 10:18:28.696282 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:28 crc kubenswrapper[4998]: E0227 10:18:28.842415 4998 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:18:29 crc kubenswrapper[4998]: I0227 10:18:29.697546 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:30 crc kubenswrapper[4998]: I0227 10:18:30.695869 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:31 crc kubenswrapper[4998]: I0227 10:18:31.697482 4998 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:31 crc kubenswrapper[4998]: E0227 10:18:31.771903 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 10:18:31 crc kubenswrapper[4998]: I0227 10:18:31.774960 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:31 crc kubenswrapper[4998]: I0227 10:18:31.776508 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:31 crc kubenswrapper[4998]: I0227 10:18:31.776619 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:31 crc kubenswrapper[4998]: I0227 10:18:31.776637 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:31 crc kubenswrapper[4998]: I0227 10:18:31.776680 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:18:31 crc kubenswrapper[4998]: E0227 10:18:31.783191 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.697332 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.993110 4998 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:37298->192.168.126.11:10357: read: connection reset by peer" start-of-body= Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.993190 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:37298->192.168.126.11:10357: read: connection reset by peer" Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.993286 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.993438 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.995274 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.995312 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.995324 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.995850 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container 
cluster-policy-controller failed startup probe, will be restarted" Feb 27 10:18:32 crc kubenswrapper[4998]: I0227 10:18:32.995958 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a" gracePeriod=30 Feb 27 10:18:33 crc kubenswrapper[4998]: I0227 10:18:33.049610 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 27 10:18:33 crc kubenswrapper[4998]: I0227 10:18:33.050996 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 10:18:33 crc kubenswrapper[4998]: I0227 10:18:33.051413 4998 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a" exitCode=255 Feb 27 10:18:33 crc kubenswrapper[4998]: I0227 10:18:33.051500 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a"} Feb 27 10:18:33 crc kubenswrapper[4998]: I0227 10:18:33.051553 4998 scope.go:117] "RemoveContainer" containerID="e3b695c09816dfe060e15e662b7e0ebacd71ebee97b82f7589f141751789fd82" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.365541 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.189813185a9c17f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,LastTimestamp:2026-02-27 10:17:28.690612209 +0000 UTC m=+0.688883207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.369964 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0551c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747839938 +0000 UTC m=+0.746110906,LastTimestamp:2026-02-27 10:17:28.747839938 +0000 UTC m=+0.746110906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.375185 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e05e561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747877729 +0000 UTC m=+0.746148707,LastTimestamp:2026-02-27 10:17:28.747877729 +0000 UTC m=+0.746148707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.380126 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0620f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747892979 +0000 UTC m=+0.746163947,LastTimestamp:2026-02-27 10:17:28.747892979 +0000 UTC m=+0.746163947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.384973 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981318641f6bf0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across 
pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.850213872 +0000 UTC m=+0.848484840,LastTimestamp:2026-02-27 10:17:28.850213872 +0000 UTC m=+0.848484840,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.390278 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0551c2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0551c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747839938 +0000 UTC m=+0.746110906,LastTimestamp:2026-02-27 10:17:28.868276255 +0000 UTC m=+0.866547223,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.396783 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e05e561\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e05e561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747877729 +0000 UTC m=+0.746148707,LastTimestamp:2026-02-27 
10:17:28.868303816 +0000 UTC m=+0.866574784,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.402551 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0620f3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0620f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747892979 +0000 UTC m=+0.746163947,LastTimestamp:2026-02-27 10:17:28.868315006 +0000 UTC m=+0.866585974,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.407538 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0551c2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0551c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747839938 +0000 UTC m=+0.746110906,LastTimestamp:2026-02-27 10:17:28.870278449 +0000 UTC m=+0.868549417,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.411656 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e05e561\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e05e561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747877729 +0000 UTC m=+0.746148707,LastTimestamp:2026-02-27 10:17:28.87029731 +0000 UTC m=+0.868568278,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.415777 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0551c2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0551c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747839938 +0000 UTC m=+0.746110906,LastTimestamp:2026-02-27 10:17:28.87031229 +0000 UTC m=+0.868583268,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.419781 4998 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e05e561\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e05e561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747877729 +0000 UTC m=+0.746148707,LastTimestamp:2026-02-27 10:17:28.870340611 +0000 UTC m=+0.868611579,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.423876 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0620f3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0620f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747892979 +0000 UTC m=+0.746163947,LastTimestamp:2026-02-27 10:17:28.870351021 +0000 UTC m=+0.868621989,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.428521 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0620f3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0620f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747892979 +0000 UTC m=+0.746163947,LastTimestamp:2026-02-27 10:17:28.870382142 +0000 UTC m=+0.868653120,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.432930 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0551c2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0551c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747839938 +0000 UTC m=+0.746110906,LastTimestamp:2026-02-27 10:17:28.872161889 +0000 UTC m=+0.870432857,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.437775 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e05e561\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e05e561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747877729 +0000 UTC m=+0.746148707,LastTimestamp:2026-02-27 10:17:28.872177749 +0000 UTC m=+0.870448717,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.442062 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0620f3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0620f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747892979 +0000 UTC m=+0.746163947,LastTimestamp:2026-02-27 10:17:28.87218964 +0000 UTC m=+0.870460608,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.445786 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0551c2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0551c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747839938 +0000 UTC m=+0.746110906,LastTimestamp:2026-02-27 10:17:28.872326703 +0000 UTC m=+0.870597671,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.449584 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e05e561\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e05e561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747877729 +0000 UTC m=+0.746148707,LastTimestamp:2026-02-27 10:17:28.872342314 +0000 UTC m=+0.870613282,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.453434 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0620f3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0620f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747892979 +0000 UTC 
m=+0.746163947,LastTimestamp:2026-02-27 10:17:28.872358904 +0000 UTC m=+0.870629872,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.457145 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0551c2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0551c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747839938 +0000 UTC m=+0.746110906,LastTimestamp:2026-02-27 10:17:28.873343771 +0000 UTC m=+0.871614739,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.460760 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e05e561\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e05e561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747877729 +0000 UTC m=+0.746148707,LastTimestamp:2026-02-27 10:17:28.873355331 +0000 UTC m=+0.871626299,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.464033 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0620f3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0620f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747892979 +0000 UTC m=+0.746163947,LastTimestamp:2026-02-27 10:17:28.873365892 +0000 UTC m=+0.871636860,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.467741 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e0551c2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e0551c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747839938 +0000 UTC m=+0.746110906,LastTimestamp:2026-02-27 10:17:28.873410773 +0000 UTC m=+0.871681741,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.471244 4998 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813185e05e561\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813185e05e561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:28.747877729 +0000 UTC m=+0.746148707,LastTimestamp:2026-02-27 10:17:28.873427763 +0000 UTC m=+0.871698731,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.475329 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813187c57a572 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:29.256551794 +0000 UTC m=+1.254822802,LastTimestamp:2026-02-27 10:17:29.256551794 +0000 UTC m=+1.254822802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 
10:18:33.478471 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189813187c5f6306 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:29.257059078 +0000 UTC m=+1.255330056,LastTimestamp:2026-02-27 10:17:29.257059078 +0000 UTC m=+1.255330056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.482131 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813187cbfa88c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:29.263368332 +0000 UTC m=+1.261639300,LastTimestamp:2026-02-27 10:17:29.263368332 +0000 UTC m=+1.261639300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.485336 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189813187d21c2c1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:29.269797569 +0000 UTC m=+1.268068537,LastTimestamp:2026-02-27 10:17:29.269797569 +0000 UTC m=+1.268068537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.489243 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189813187d29075a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:29.270273882 +0000 UTC m=+1.268544890,LastTimestamp:2026-02-27 10:17:29.270273882 +0000 UTC m=+1.268544890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.493870 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318ad0de683 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.073802371 +0000 UTC m=+2.072073339,LastTimestamp:2026-02-27 10:17:30.073802371 +0000 UTC m=+2.072073339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.495137 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318ad0e2b9e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.073820062 +0000 UTC m=+2.072091060,LastTimestamp:2026-02-27 10:17:30.073820062 +0000 UTC m=+2.072091060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.499016 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318ad0e80e4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.073841892 +0000 UTC m=+2.072112860,LastTimestamp:2026-02-27 10:17:30.073841892 +0000 UTC m=+2.072112860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.502868 4998 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981318ad0efdd1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.073873873 +0000 UTC m=+2.072144851,LastTimestamp:2026-02-27 10:17:30.073873873 +0000 UTC m=+2.072144851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.507153 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981318ad16adce openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.074377678 +0000 UTC m=+2.072648666,LastTimestamp:2026-02-27 10:17:30.074377678 +0000 UTC m=+2.072648666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.511580 4998 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318ae28c42f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.092340271 +0000 UTC m=+2.090611329,LastTimestamp:2026-02-27 10:17:30.092340271 +0000 UTC m=+2.090611329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.517788 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318ae460d83 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.094259587 +0000 UTC m=+2.092530565,LastTimestamp:2026-02-27 10:17:30.094259587 +0000 UTC 
m=+2.092530565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.529540 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981318ae5ba3ce openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.095674318 +0000 UTC m=+2.093945296,LastTimestamp:2026-02-27 10:17:30.095674318 +0000 UTC m=+2.093945296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.534113 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318ae81c0f7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
10:17:30.098172151 +0000 UTC m=+2.096443119,LastTimestamp:2026-02-27 10:17:30.098172151 +0000 UTC m=+2.096443119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.538752 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981318ae84f18d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.098381197 +0000 UTC m=+2.096652165,LastTimestamp:2026-02-27 10:17:30.098381197 +0000 UTC m=+2.096652165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.543474 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318ae948110 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.099400976 +0000 UTC 
m=+2.097671944,LastTimestamp:2026-02-27 10:17:30.099400976 +0000 UTC m=+2.097671944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.548314 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318c0d784b1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.405782705 +0000 UTC m=+2.404053673,LastTimestamp:2026-02-27 10:17:30.405782705 +0000 UTC m=+2.404053673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.552386 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318c1be32d4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.420900564 +0000 UTC m=+2.419171532,LastTimestamp:2026-02-27 10:17:30.420900564 +0000 UTC m=+2.419171532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.557037 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318c1dab1d9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.422768089 +0000 UTC m=+2.421039067,LastTimestamp:2026-02-27 10:17:30.422768089 +0000 UTC m=+2.421039067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.561379 4998 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318cd5f9b50 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.616027984 +0000 UTC m=+2.614298962,LastTimestamp:2026-02-27 10:17:30.616027984 +0000 UTC m=+2.614298962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.565470 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318ce5f8271 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.632798833 +0000 UTC m=+2.631069801,LastTimestamp:2026-02-27 10:17:30.632798833 +0000 UTC m=+2.631069801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.569072 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318ce785f28 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.6344282 +0000 UTC m=+2.632699158,LastTimestamp:2026-02-27 10:17:30.6344282 +0000 UTC m=+2.632699158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.573815 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981318d793394e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.787182926 +0000 UTC m=+2.785453914,LastTimestamp:2026-02-27 10:17:30.787182926 +0000 UTC m=+2.785453914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.577668 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318d7f6c3ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.793706426 +0000 UTC m=+2.791977394,LastTimestamp:2026-02-27 10:17:30.793706426 +0000 UTC m=+2.791977394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.582354 4998 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981318d802d2bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.7944967 +0000 UTC m=+2.792767668,LastTimestamp:2026-02-27 10:17:30.7944967 +0000 UTC m=+2.792767668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.586682 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318d83a808c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.798145676 +0000 UTC m=+2.796416644,LastTimestamp:2026-02-27 10:17:30.798145676 
+0000 UTC m=+2.796416644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.591173 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318d8feba8b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.811005579 +0000 UTC m=+2.809276557,LastTimestamp:2026-02-27 10:17:30.811005579 +0000 UTC m=+2.809276557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.595475 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318e08e71a4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.937864612 +0000 UTC m=+2.936135580,LastTimestamp:2026-02-27 10:17:30.937864612 +0000 UTC m=+2.936135580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.597539 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981318e63f5460 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.033343072 +0000 UTC m=+3.031614030,LastTimestamp:2026-02-27 10:17:31.033343072 +0000 UTC m=+3.031614030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.599352 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318e659c2e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.035075301 +0000 UTC m=+3.033346269,LastTimestamp:2026-02-27 10:17:31.035075301 +0000 UTC m=+3.033346269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.602410 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318e65c648c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.035247756 +0000 UTC m=+3.033518744,LastTimestamp:2026-02-27 10:17:31.035247756 +0000 UTC m=+3.033518744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.606332 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981318e6657dae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.035844014 +0000 UTC m=+3.034114982,LastTimestamp:2026-02-27 10:17:31.035844014 +0000 UTC m=+3.034114982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.610104 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318e791ccc1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.055525057 +0000 UTC m=+3.053796025,LastTimestamp:2026-02-27 10:17:31.055525057 +0000 UTC m=+3.053796025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.615080 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318e7acdddb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.057298907 +0000 UTC m=+3.055569875,LastTimestamp:2026-02-27 10:17:31.057298907 +0000 UTC m=+3.055569875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.619617 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981318e850d85f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.068045407 +0000 UTC m=+3.066316375,LastTimestamp:2026-02-27 10:17:31.068045407 +0000 UTC m=+3.066316375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.623996 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318e8586e1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.06854249 +0000 UTC m=+3.066813458,LastTimestamp:2026-02-27 10:17:31.06854249 +0000 UTC m=+3.066813458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.628595 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318e876c5b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.070530994 +0000 UTC m=+3.068801962,LastTimestamp:2026-02-27 10:17:31.070530994 +0000 UTC m=+3.068801962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.632623 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981318e8c639e2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.075738082 +0000 UTC m=+3.074009050,LastTimestamp:2026-02-27 10:17:31.075738082 +0000 UTC m=+3.074009050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.637525 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318f2496310 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.235328784 +0000 UTC m=+3.233599752,LastTimestamp:2026-02-27 10:17:31.235328784 +0000 UTC m=+3.233599752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.643648 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318f29ba30f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.240719119 +0000 UTC m=+3.238990087,LastTimestamp:2026-02-27 10:17:31.240719119 +0000 UTC m=+3.238990087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.650997 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318f389c908 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.256326408 +0000 UTC m=+3.254597376,LastTimestamp:2026-02-27 10:17:31.256326408 +0000 UTC m=+3.254597376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.655501 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318f39ec865 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.257702501 +0000 UTC m=+3.255973469,LastTimestamp:2026-02-27 10:17:31.257702501 +0000 UTC m=+3.255973469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc 
kubenswrapper[4998]: E0227 10:18:33.659944 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318f41779f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.265612273 +0000 UTC m=+3.263883241,LastTimestamp:2026-02-27 10:17:31.265612273 +0000 UTC m=+3.263883241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.664996 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318f42a85d6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.266860502 +0000 UTC 
m=+3.265131470,LastTimestamp:2026-02-27 10:17:31.266860502 +0000 UTC m=+3.265131470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.669669 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981318ff5f1527 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.454854439 +0000 UTC m=+3.453125407,LastTimestamp:2026-02-27 10:17:31.454854439 +0000 UTC m=+3.453125407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.675553 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981318ff5fc48c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container 
kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.45489934 +0000 UTC m=+3.453170318,LastTimestamp:2026-02-27 10:17:31.45489934 +0000 UTC m=+3.453170318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.680892 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981319009533fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.47517849 +0000 UTC m=+3.473449458,LastTimestamp:2026-02-27 10:17:31.47517849 +0000 UTC m=+3.473449458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.685288 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898131900b10818 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.477002264 +0000 UTC m=+3.475273232,LastTimestamp:2026-02-27 10:17:31.477002264 +0000 UTC m=+3.475273232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.690047 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898131900e5f976 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.480471926 +0000 UTC m=+3.478742894,LastTimestamp:2026-02-27 10:17:31.480471926 +0000 UTC m=+3.478742894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: I0227 10:18:33.693968 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.694639 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189813190b525399 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.655345049 +0000 UTC m=+3.653616017,LastTimestamp:2026-02-27 10:17:31.655345049 +0000 UTC m=+3.653616017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.698511 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189813190c5cc17f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
10:17:31.672805759 +0000 UTC m=+3.671076727,LastTimestamp:2026-02-27 10:17:31.672805759 +0000 UTC m=+3.671076727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.702918 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189813190c6b6a10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.673766416 +0000 UTC m=+3.672037384,LastTimestamp:2026-02-27 10:17:31.673766416 +0000 UTC m=+3.672037384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.708002 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898131914aab4cd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.812132045 +0000 UTC m=+3.810403013,LastTimestamp:2026-02-27 10:17:31.812132045 +0000 UTC m=+3.810403013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.713282 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189813191cdba17f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.949556095 +0000 UTC m=+3.947827063,LastTimestamp:2026-02-27 10:17:31.949556095 +0000 UTC m=+3.947827063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.717966 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189813191dbcc69d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:31.964311197 +0000 UTC m=+3.962582175,LastTimestamp:2026-02-27 10:17:31.964311197 +0000 UTC m=+3.962582175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.723564 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898131920d9ca18 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:32.01654428 +0000 UTC m=+4.014815248,LastTimestamp:2026-02-27 10:17:32.01654428 +0000 UTC m=+4.014815248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.730150 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898131921ab5bd3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:32.030278611 +0000 UTC m=+4.028549569,LastTimestamp:2026-02-27 10:17:32.030278611 +0000 UTC m=+4.028549569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.741415 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813195182c0d3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:32.832923859 +0000 UTC m=+4.831194827,LastTimestamp:2026-02-27 10:17:32.832923859 +0000 UTC m=+4.831194827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.745294 4998 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813195bef234f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.007799119 +0000 UTC m=+5.006070087,LastTimestamp:2026-02-27 10:17:33.007799119 +0000 UTC m=+5.006070087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.749140 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813195d45cd61 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.030255969 +0000 UTC m=+5.028526937,LastTimestamp:2026-02-27 10:17:33.030255969 +0000 UTC m=+5.028526937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.752863 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813195d5a7777 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.031610231 +0000 UTC m=+5.029881199,LastTimestamp:2026-02-27 10:17:33.031610231 +0000 UTC m=+5.029881199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.760007 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813196d7627e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.30186032 +0000 UTC m=+5.300131288,LastTimestamp:2026-02-27 10:17:33.30186032 +0000 UTC m=+5.300131288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.764816 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813196e8ab7ca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.319985098 +0000 UTC m=+5.318256066,LastTimestamp:2026-02-27 10:17:33.319985098 +0000 UTC m=+5.318256066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.769577 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813196ea73a7c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.321853564 +0000 UTC m=+5.320124532,LastTimestamp:2026-02-27 10:17:33.321853564 +0000 UTC m=+5.320124532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.778294 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813197b1ab133 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.530747187 +0000 UTC m=+5.529018165,LastTimestamp:2026-02-27 10:17:33.530747187 +0000 UTC m=+5.529018165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.782727 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813197bf516bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.545060028 +0000 UTC m=+5.543331026,LastTimestamp:2026-02-27 10:17:33.545060028 +0000 UTC m=+5.543331026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.787411 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813197c08d120 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.546352928 +0000 UTC m=+5.544623906,LastTimestamp:2026-02-27 10:17:33.546352928 +0000 UTC m=+5.544623906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.792765 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898131991e916ce openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.913372366 +0000 UTC m=+5.911643384,LastTimestamp:2026-02-27 10:17:33.913372366 +0000 UTC m=+5.911643384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.796829 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813199609a390 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.982614416 +0000 UTC m=+5.980885384,LastTimestamp:2026-02-27 10:17:33.982614416 +0000 UTC m=+5.980885384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.800691 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813199627be2b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:33.984587307 +0000 UTC m=+5.982858275,LastTimestamp:2026-02-27 10:17:33.984587307 +0000 UTC m=+5.982858275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.804346 4998 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981319a0990a16 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:34.15978447 +0000 UTC m=+6.158055438,LastTimestamp:2026-02-27 10:17:34.15978447 +0000 UTC m=+6.158055438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.808205 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981319a13b0017 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:34.170398743 +0000 UTC m=+6.168669711,LastTimestamp:2026-02-27 10:17:34.170398743 +0000 UTC m=+6.168669711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.813417 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 10:18:33 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.1898131bba7fbadb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44718->192.168.126.11:17697: read: connection reset by peer Feb 27 10:18:33 crc kubenswrapper[4998]: body: Feb 27 10:18:33 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.184267995 +0000 UTC m=+15.182538963,LastTimestamp:2026-02-27 10:17:43.184267995 +0000 UTC m=+15.182538963,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:18:33 crc kubenswrapper[4998]: > Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.817351 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 10:18:33 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.1898131bba80326d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44726->192.168.126.11:17697: read: connection reset by peer Feb 27 10:18:33 crc 
kubenswrapper[4998]: body: Feb 27 10:18:33 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.184298605 +0000 UTC m=+15.182569573,LastTimestamp:2026-02-27 10:17:43.184298605 +0000 UTC m=+15.182569573,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:18:33 crc kubenswrapper[4998]: > Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.822860 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898131bba80ea1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44718->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.184345626 +0000 UTC m=+15.182616594,LastTimestamp:2026-02-27 10:17:43.184345626 +0000 UTC m=+15.182616594,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.827691 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898131bba8192d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44726->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.184388817 +0000 UTC m=+15.182659795,LastTimestamp:2026-02-27 10:17:43.184388817 +0000 UTC m=+15.182659795,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.831438 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 10:18:33 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.1898131bc4204237 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 27 10:18:33 crc kubenswrapper[4998]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 10:18:33 crc kubenswrapper[4998]: Feb 27 10:18:33 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.345783351 +0000 UTC m=+15.344054329,LastTimestamp:2026-02-27 10:17:43.345783351 +0000 
UTC m=+15.344054329,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:18:33 crc kubenswrapper[4998]: > Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.835551 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898131bc42135db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.345845723 +0000 UTC m=+15.344116701,LastTimestamp:2026-02-27 10:17:43.345845723 +0000 UTC m=+15.344116701,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.838994 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 10:18:33 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.1898131bc49a6fac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with 
statuscode: 403 Feb 27 10:18:33 crc kubenswrapper[4998]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 27 10:18:33 crc kubenswrapper[4998]: Feb 27 10:18:33 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.35379038 +0000 UTC m=+15.352061358,LastTimestamp:2026-02-27 10:17:43.35379038 +0000 UTC m=+15.352061358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:18:33 crc kubenswrapper[4998]: > Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.843596 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 10:18:33 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-controller-manager-crc.1898131bdf6dd2ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 10:18:33 crc kubenswrapper[4998]: body: Feb 27 10:18:33 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.80385145 +0000 UTC 
m=+15.802122458,LastTimestamp:2026-02-27 10:17:43.80385145 +0000 UTC m=+15.802122458,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:18:33 crc kubenswrapper[4998]: > Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.847807 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898131bdf6eee4a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.803924042 +0000 UTC m=+15.802195050,LastTimestamp:2026-02-27 10:17:43.803924042 +0000 UTC m=+15.802195050,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.853497 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898131bdf6dd2ba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 10:18:33 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-controller-manager-crc.1898131bdf6dd2ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 10:18:33 crc kubenswrapper[4998]: body: Feb 27 10:18:33 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.80385145 +0000 UTC m=+15.802122458,LastTimestamp:2026-02-27 10:17:53.804751901 +0000 UTC m=+25.803022869,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:18:33 crc kubenswrapper[4998]: > Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.857011 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898131bdf6eee4a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898131bdf6eee4a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.803924042 +0000 UTC m=+15.802195050,LastTimestamp:2026-02-27 
10:17:53.804793062 +0000 UTC m=+25.803064030,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.861006 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 10:18:33 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-controller-manager-crc.1898132014d0b1f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:37934->192.168.126.11:10357: read: connection reset by peer Feb 27 10:18:33 crc kubenswrapper[4998]: body: Feb 27 10:18:33 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:18:01.879392757 +0000 UTC m=+33.877663755,LastTimestamp:2026-02-27 10:18:01.879392757 +0000 UTC m=+33.877663755,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:18:33 crc kubenswrapper[4998]: > Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.864645 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898132014d1e563 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:37934->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:18:01.879471459 +0000 UTC m=+33.877742457,LastTimestamp:2026-02-27 10:18:01.879471459 +0000 UTC m=+33.877742457,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.869205 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898132014f5be8e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:18:01.881820814 +0000 UTC m=+33.880091802,LastTimestamp:2026-02-27 10:18:01.881820814 +0000 UTC m=+33.880091802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.873009 4998 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-controller-manager-crc.18981318ae460d83\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318ae460d83 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.094259587 +0000 UTC m=+2.092530565,LastTimestamp:2026-02-27 10:18:02.404072066 +0000 UTC m=+34.402343084,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.877073 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18981318c0d784b1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318c0d784b1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
10:17:30.405782705 +0000 UTC m=+2.404053673,LastTimestamp:2026-02-27 10:18:02.600707321 +0000 UTC m=+34.598978289,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.881633 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18981318c1be32d4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981318c1be32d4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:30.420900564 +0000 UTC m=+2.419171532,LastTimestamp:2026-02-27 10:18:02.611652619 +0000 UTC m=+34.609923577,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.889600 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898131bdf6dd2ba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 10:18:33 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-controller-manager-crc.1898131bdf6dd2ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 10:18:33 crc kubenswrapper[4998]: body: Feb 27 10:18:33 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.80385145 +0000 UTC m=+15.802122458,LastTimestamp:2026-02-27 10:18:13.804335646 +0000 UTC m=+45.802606614,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:18:33 crc kubenswrapper[4998]: > Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.894491 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898131bdf6eee4a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898131bdf6eee4a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.803924042 +0000 UTC m=+15.802195050,LastTimestamp:2026-02-27 10:18:13.804381767 
+0000 UTC m=+45.802652735,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:18:33 crc kubenswrapper[4998]: E0227 10:18:33.899550 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898131bdf6dd2ba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 10:18:33 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-controller-manager-crc.1898131bdf6dd2ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 10:18:33 crc kubenswrapper[4998]: body: Feb 27 10:18:33 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:17:43.80385145 +0000 UTC m=+15.802122458,LastTimestamp:2026-02-27 10:18:23.804400575 +0000 UTC m=+55.802671543,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:18:33 crc kubenswrapper[4998]: > Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.056339 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.057636 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"54c1076d7b06c386da934e0ef2a7ae42071884aaa9781bf133c657585aadddbc"} Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.057745 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.058639 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.058679 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.058691 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.310273 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.696286 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.764213 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.765310 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.765364 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:34 crc kubenswrapper[4998]: I0227 10:18:34.765377 4998 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:35 crc kubenswrapper[4998]: I0227 10:18:35.060035 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:35 crc kubenswrapper[4998]: I0227 10:18:35.061041 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:35 crc kubenswrapper[4998]: I0227 10:18:35.061092 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:35 crc kubenswrapper[4998]: I0227 10:18:35.061105 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:35 crc kubenswrapper[4998]: W0227 10:18:35.668943 4998 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 27 10:18:35 crc kubenswrapper[4998]: E0227 10:18:35.669011 4998 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 27 10:18:35 crc kubenswrapper[4998]: I0227 10:18:35.695914 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.061559 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.062591 4998 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.062638 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.062653 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.697106 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.764709 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.766008 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.766065 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.766078 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:36 crc kubenswrapper[4998]: I0227 10:18:36.766801 4998 scope.go:117] "RemoveContainer" containerID="49baf55a1ea2d1102381bce1e14db27baca5eca38c787aee06cbf5395da0698f" Feb 27 10:18:37 crc kubenswrapper[4998]: I0227 10:18:37.066115 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 10:18:37 crc kubenswrapper[4998]: I0227 10:18:37.067809 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2"}
Feb 27 10:18:37 crc kubenswrapper[4998]: I0227 10:18:37.067928 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:37 crc kubenswrapper[4998]: I0227 10:18:37.068629 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:37 crc kubenswrapper[4998]: I0227 10:18:37.068651 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:37 crc kubenswrapper[4998]: I0227 10:18:37.068660 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:37 crc kubenswrapper[4998]: I0227 10:18:37.697125 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:18:38 crc kubenswrapper[4998]: I0227 10:18:38.694630 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:18:38 crc kubenswrapper[4998]: E0227 10:18:38.776297 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 10:18:38 crc kubenswrapper[4998]: I0227 10:18:38.783504 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:38 crc kubenswrapper[4998]: I0227 10:18:38.784618 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:38 crc kubenswrapper[4998]: I0227 10:18:38.784650 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:38 crc kubenswrapper[4998]: I0227 10:18:38.784659 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:38 crc kubenswrapper[4998]: I0227 10:18:38.784681 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:18:38 crc kubenswrapper[4998]: E0227 10:18:38.788044 4998 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 10:18:38 crc kubenswrapper[4998]: E0227 10:18:38.843618 4998 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.074374 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.075074 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.076767 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2" exitCode=255
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.076810 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2"}
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.076854 4998 scope.go:117] "RemoveContainer" containerID="49baf55a1ea2d1102381bce1e14db27baca5eca38c787aee06cbf5395da0698f"
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.077206 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.078593 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.078625 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.078636 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.079176 4998 scope.go:117] "RemoveContainer" containerID="9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2"
Feb 27 10:18:39 crc kubenswrapper[4998]: E0227 10:18:39.079446 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:18:39 crc kubenswrapper[4998]: I0227 10:18:39.696658 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.080648 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.389005 4998 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.401676 4998 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.695907 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.803575 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.803979 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.805560 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.805596 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.805604 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:40 crc kubenswrapper[4998]: I0227 10:18:40.808769 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.085902 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.086868 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.086930 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.086947 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.683096 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.683296 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.684580 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.684618 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.684631 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.685177 4998 scope.go:117] "RemoveContainer" containerID="9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2"
Feb 27 10:18:41 crc kubenswrapper[4998]: E0227 10:18:41.685473 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:18:41 crc kubenswrapper[4998]: I0227 10:18:41.695270 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:18:42 crc kubenswrapper[4998]: I0227 10:18:42.235032 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:18:42 crc kubenswrapper[4998]: I0227 10:18:42.235215 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:42 crc kubenswrapper[4998]: I0227 10:18:42.236382 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:42 crc kubenswrapper[4998]: I0227 10:18:42.236408 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:42 crc kubenswrapper[4998]: I0227 10:18:42.236417 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:42 crc kubenswrapper[4998]: I0227 10:18:42.237285 4998 scope.go:117] "RemoveContainer" containerID="9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2"
Feb 27 10:18:42 crc kubenswrapper[4998]: E0227 10:18:42.237572 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:18:42 crc kubenswrapper[4998]: I0227 10:18:42.696782 4998 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:18:43 crc kubenswrapper[4998]: I0227 10:18:43.644504 4998 csr.go:261] certificate signing request csr-5jz8w is approved, waiting to be issued
Feb 27 10:18:43 crc kubenswrapper[4998]: I0227 10:18:43.651805 4998 csr.go:257] certificate signing request csr-5jz8w is issued
Feb 27 10:18:43 crc kubenswrapper[4998]: I0227 10:18:43.661395 4998 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 27 10:18:44 crc kubenswrapper[4998]: I0227 10:18:44.313620 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:18:44 crc kubenswrapper[4998]: I0227 10:18:44.313786 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:44 crc kubenswrapper[4998]: I0227 10:18:44.315132 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:44 crc kubenswrapper[4998]: I0227 10:18:44.315172 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:44 crc kubenswrapper[4998]: I0227 10:18:44.315180 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:44 crc kubenswrapper[4998]: I0227 10:18:44.527343 4998 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 27 10:18:44 crc kubenswrapper[4998]: I0227 10:18:44.653564 4998 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-29 09:50:59.272621942 +0000 UTC
Feb 27 10:18:44 crc kubenswrapper[4998]: I0227 10:18:44.653677 4998 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7319h32m14.618953483s for next certificate rotation
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.789061 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.790531 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.790559 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.790573 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.790724 4998 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.799425 4998 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.799688 4998 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 27 10:18:45 crc kubenswrapper[4998]: E0227 10:18:45.799727 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.802807 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.802832 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.802840 4998 kubelet_node_status.go:724] "Recording event message for
node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.802871 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.802882 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:18:45Z","lastTransitionTime":"2026-02-27T10:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:18:45 crc kubenswrapper[4998]: E0227 10:18:45.815827 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.822494 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.822535 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.822544 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.822557 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.822568 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:18:45Z","lastTransitionTime":"2026-02-27T10:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:18:45 crc kubenswrapper[4998]: E0227 10:18:45.835940 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.844833 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.844879 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.844892 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.844910 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.844922 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:18:45Z","lastTransitionTime":"2026-02-27T10:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:18:45 crc kubenswrapper[4998]: E0227 10:18:45.855612 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.866160 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.866203 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.866215 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.866243 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:18:45 crc kubenswrapper[4998]: I0227 10:18:45.866252 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:18:45Z","lastTransitionTime":"2026-02-27T10:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:18:45 crc kubenswrapper[4998]: E0227 10:18:45.878200 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:18:45 crc kubenswrapper[4998]: E0227 10:18:45.878365 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 27 10:18:45 crc kubenswrapper[4998]: E0227 10:18:45.878393 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:45 crc kubenswrapper[4998]: E0227 10:18:45.979019 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.080072 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.180804 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.281636 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.382367 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.483517 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.584572 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.685138 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.786129 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.886524 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:46 crc kubenswrapper[4998]: E0227 10:18:46.987527 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.088632 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.189516 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.290364 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.391146 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.491952 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.592337 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.692850 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.793357 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.893889 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:47 crc kubenswrapper[4998]: E0227 10:18:47.994727 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.095795 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.196321 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.296799 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.397731 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.498284 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.598625 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.698993 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.799625 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.843963 4998 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:18:48 crc kubenswrapper[4998]: E0227 10:18:48.900877 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.001656 4998 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.102316 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.203192 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.304324 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.405076 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.505739 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.606760 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.707061 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:49 crc kubenswrapper[4998]: I0227 10:18:49.764067 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:18:49 crc kubenswrapper[4998]: I0227 10:18:49.765671 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:49 crc kubenswrapper[4998]: I0227 10:18:49.765747 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:49 crc kubenswrapper[4998]: I0227 10:18:49.765860 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.807287 4998 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:49 crc kubenswrapper[4998]: E0227 10:18:49.908416 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.009178 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.109404 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.209491 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.310418 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.411293 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.512216 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.612633 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.713191 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.813765 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:50 crc kubenswrapper[4998]: E0227 10:18:50.914544 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc 
kubenswrapper[4998]: E0227 10:18:51.015750 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc kubenswrapper[4998]: E0227 10:18:51.116941 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc kubenswrapper[4998]: E0227 10:18:51.218027 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc kubenswrapper[4998]: E0227 10:18:51.318758 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc kubenswrapper[4998]: E0227 10:18:51.419573 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc kubenswrapper[4998]: E0227 10:18:51.519878 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc kubenswrapper[4998]: E0227 10:18:51.620871 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc kubenswrapper[4998]: E0227 10:18:51.722008 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc kubenswrapper[4998]: E0227 10:18:51.823115 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:51 crc kubenswrapper[4998]: E0227 10:18:51.924015 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.024910 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.126082 4998 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.226524 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.327063 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.427978 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.529192 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.629948 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.730900 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.831528 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:52 crc kubenswrapper[4998]: E0227 10:18:52.931901 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.033009 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.133655 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.234104 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.335035 4998 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.435760 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.536728 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.637832 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.738493 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.838941 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:53 crc kubenswrapper[4998]: E0227 10:18:53.940054 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 10:18:54.040396 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 10:18:54.141282 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 10:18:54.241464 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 10:18:54.341988 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 10:18:54.443138 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 
10:18:54.544265 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 10:18:54.644663 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 10:18:54.745515 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 10:18:54.845936 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:54 crc kubenswrapper[4998]: E0227 10:18:54.946967 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.048060 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.148846 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.249269 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.350119 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.451031 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.551870 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.653190 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 
10:18:55 crc kubenswrapper[4998]: I0227 10:18:55.655045 4998 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.753890 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.854462 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:55 crc kubenswrapper[4998]: E0227 10:18:55.954837 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.055948 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.156917 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.257920 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.271116 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.275125 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.275156 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.275166 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.275182 4998 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.275194 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:18:56Z","lastTransitionTime":"2026-02-27T10:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.289083 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.292477 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.292520 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.292531 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.292547 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.292559 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:18:56Z","lastTransitionTime":"2026-02-27T10:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.301870 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.304833 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.304876 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.304892 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.304912 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.304941 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:18:56Z","lastTransitionTime":"2026-02-27T10:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.314715 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.318312 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.318356 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.318366 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.318382 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:18:56 crc kubenswrapper[4998]: I0227 10:18:56.318393 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:18:56Z","lastTransitionTime":"2026-02-27T10:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.327373 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.327482 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.358798 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.459449 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.560284 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.661209 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.762314 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.863141 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:56 crc kubenswrapper[4998]: E0227 10:18:56.963959 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.064612 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.165305 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.265504 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.366658 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.467744 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.568283 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.669265 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: I0227 10:18:57.764058 4998 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:18:57 crc kubenswrapper[4998]: I0227 10:18:57.765335 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:18:57 crc kubenswrapper[4998]: I0227 10:18:57.765382 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:18:57 crc kubenswrapper[4998]: I0227 10:18:57.765392 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:18:57 crc kubenswrapper[4998]: I0227 10:18:57.765994 4998 scope.go:117] "RemoveContainer" containerID="9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.766168 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.769911 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.870822 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:57 crc kubenswrapper[4998]: E0227 10:18:57.971379 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.071708 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.172239 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.272645 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.372723 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.473882 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.574599 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.675152 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.776313 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.845153 4998 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.876741 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:58 crc kubenswrapper[4998]: E0227 10:18:58.977420 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.077882 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.178027 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.279132 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.379633 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.480592 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.580993 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.681932 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.782976 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.883124 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:18:59 crc kubenswrapper[4998]: E0227 10:18:59.983876 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: I0227 10:19:00.025899 4998 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.084642 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.185446 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.286385 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.386809 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.487132 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.588208 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.688540 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.789648 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.890608 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:00 crc kubenswrapper[4998]: E0227 10:19:00.991519 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: E0227 10:19:01.092133 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: E0227 10:19:01.192353 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: E0227 10:19:01.293426 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: E0227 10:19:01.393696 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: E0227 10:19:01.494407 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: E0227 10:19:01.595469 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: E0227 10:19:01.696398 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: E0227 10:19:01.796511 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: E0227 10:19:01.897163 4998 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 10:19:01 crc kubenswrapper[4998]: I0227 10:19:01.981472 4998 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.000211 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.000270 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.000299 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.000314 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.000325 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.103055 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.103097 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.103108 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.103129 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.103138 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.205701 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.205746 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.205756 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.205773 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.205783 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.308350 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.308388 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.308397 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.308411 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.308422 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.410992 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.411040 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.411051 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.411068 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.411080 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.513696 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.513755 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.513774 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.513798 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.513814 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.617276 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.617356 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.617392 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.617422 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.617443 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.720330 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.720409 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.720432 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.720525 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.720598 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.724968 4998 apiserver.go:52] "Watching apiserver"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.731145 4998 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.731486 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.732002 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.732088 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.732105 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.732338 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.732384 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.732680 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.732764 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.732701 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.732934 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.735543 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.736618 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.736711 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.737302 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.737819 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.738315 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.738357 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.738695 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.739286 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.760188 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.776073 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.790353 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.797047 4998 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.802677 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.812747 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.821472 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.822783 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.822823 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.822840 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 
10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.822857 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.822872 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.833358 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.846157 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.866003 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.886626 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.887075 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.887113 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.887391 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.887453 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.887482 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.887545 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.887899 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.887926 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.887973 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888112 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888208 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888006 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888371 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888384 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888416 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888443 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888465 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.888505 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:19:03.388465184 +0000 UTC m=+95.386736152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888529 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888554 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 
10:19:02.888592 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888610 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888625 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888639 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888706 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888767 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888787 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888816 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888831 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888849 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888917 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888939 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.888960 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889001 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889020 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889037 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889054 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889080 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889087 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889155 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889187 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889214 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889264 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889287 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889307 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889331 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889345 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889353 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889379 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889402 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889428 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889450 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889452 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889472 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889555 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889600 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889645 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889713 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889750 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889797 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889837 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889877 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889915 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889953 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889958 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.889991 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890028 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890064 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890099 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890144 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890152 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890197 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890291 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890331 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890367 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890403 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890446 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890481 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890519 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890557 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890593 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890631 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890667 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890701 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890738 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890772 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890810 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890848 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890888 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890925 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890967 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891008 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891047 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891092 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891133 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891172 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891211 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891280 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891316 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891351 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891387 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891421 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891459 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891499 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891536 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891572 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891606 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891640 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891675 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891711 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891748 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891782 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891820 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891854 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891892 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891931 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891968 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892004 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892057 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892094 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892133 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892170 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892206 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892290 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892329 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892367 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892408 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892448 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892488 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892529 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892566 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892604 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892642 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892680 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892723 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892763 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892856 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892933 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892973 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893011 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893050 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893086 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893120 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893155 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893192 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893250 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893291 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893329 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893368 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893407 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893447 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893488 
4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893528 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893567 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893605 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893645 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893682 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893716 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893757 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893792 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893833 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893867 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893903 4998 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893941 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893975 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894009 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894042 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894077 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894173 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894212 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894289 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894331 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894405 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894444 4998 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894481 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894520 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894555 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894590 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894624 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894660 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894693 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894725 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894758 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894791 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894831 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894871 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894908 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894942 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894978 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895014 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895056 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895093 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895130 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895172 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895211 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895288 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895328 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895366 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895403 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895442 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895478 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895513 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895548 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895584 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895622 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895662 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895698 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895763 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895815 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895855 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895894 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895937 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895983 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896019 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896061 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896102 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896145 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896187 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896246 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896287 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896329 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899184 4998 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899272 4998 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899360 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899373 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899386 4998 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899400 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899422 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899438 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899455 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899474 4998 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.907179 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.909883 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.910354 4998 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.912915 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890368 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.914026 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.914036 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.915217 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.915592 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.915920 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.916624 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.924206 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.924619 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.924796 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.925051 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.925462 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.912800 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890725 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891286 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891755 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892287 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892325 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892347 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.892929 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893147 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.913841 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893427 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.893621 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894010 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894421 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894598 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894714 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.894801 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895528 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895879 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.895996 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896154 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896193 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.896307 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.898585 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.898642 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.898837 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899128 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899135 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899155 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.926081 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899467 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899482 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899711 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899743 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.899979 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.900070 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.900261 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.900386 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.900497 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.900654 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.900767 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.900894 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901062 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901076 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901098 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901133 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901272 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901466 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901699 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901765 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901780 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.901984 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.902138 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.902150 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.902344 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.902379 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.902474 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.902500 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.902765 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.903019 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.903052 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.903091 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.903425 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.903620 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.904189 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.904242 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.904204 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.904679 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.904796 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.904813 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.904829 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.904969 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905116 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905156 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905292 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905398 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905402 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905691 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905721 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905816 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905860 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.905896 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906040 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906065 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906289 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906333 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906375 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906428 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906500 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906596 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906708 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906729 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.906832 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.907000 4998 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.907793 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.908050 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.908149 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.908280 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.908378 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.908421 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.908469 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.908548 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.908593 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.908833 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.909155 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.909752 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.909830 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.910024 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.910199 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.910284 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.910313 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.911298 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.912071 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.913164 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.912050 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.890948 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.924903 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.913111 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.891029 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.926925 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.926925 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.926969 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.927072 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.927363 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.927374 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.927380 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.927665 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:03.427643273 +0000 UTC m=+95.425914241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.927746 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.927772 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.927789 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.928035 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.928057 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.928254 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.928328 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.928361 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.928480 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.928747 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.929057 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.926556 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.929337 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.929363 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.929368 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.929564 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:03.429541334 +0000 UTC m=+95.427812312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.929603 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.929651 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.929621 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-27 10:19:03.429607015 +0000 UTC m=+95.427877993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.929685 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.929835 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.929859 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.930141 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.930194 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.930645 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.930903 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.931371 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.931562 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.931946 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.931978 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.931988 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.932080 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.932148 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.932173 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.932186 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.932379 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.932679 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.932825 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.933211 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.933306 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.933419 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.933427 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.933543 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.934174 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.934752 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.935356 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.935473 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.935753 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.935788 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.935950 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.938324 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.942429 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.942464 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:02Z","lastTransitionTime":"2026-02-27T10:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.942768 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.942970 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.943086 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.943165 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:02 crc kubenswrapper[4998]: E0227 10:19:02.943695 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:03.44366555 +0000 UTC m=+95.441936518 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.960195 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.973329 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.974305 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:19:02 crc kubenswrapper[4998]: I0227 10:19:02.983564 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002094 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002159 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002249 4998 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002265 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002278 4998 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002289 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002300 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002313 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002324 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002337 4998 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002325 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002364 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002348 4998 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002419 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002435 4998 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002449 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002462 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002476 4998 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc 
kubenswrapper[4998]: I0227 10:19:03.002488 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002499 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002513 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002526 4998 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002538 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002550 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002562 4998 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002573 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" 
(UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002585 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002597 4998 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002608 4998 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002619 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002631 4998 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002645 4998 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002659 4998 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002670 4998 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002682 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002693 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002706 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002718 4998 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002729 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002742 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002785 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002802 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002815 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002828 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002840 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002853 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002864 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002875 4998 reconciler_common.go:293] "Volume 
detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002887 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002899 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002912 4998 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002923 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002936 4998 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002947 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002958 4998 reconciler_common.go:293] "Volume detached for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002969 4998 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002982 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.002996 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003007 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003018 4998 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003030 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003044 4998 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 27 
10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003056 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003070 4998 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003082 4998 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003094 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003106 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003116 4998 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003128 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003139 4998 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003151 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003163 4998 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003176 4998 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003188 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003199 4998 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003210 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003221 4998 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003249 4998 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003261 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003273 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003284 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003296 4998 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003308 4998 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003319 4998 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 
crc kubenswrapper[4998]: I0227 10:19:03.003332 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003346 4998 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003359 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003370 4998 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003381 4998 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003394 4998 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003406 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 
10:19:03.003418 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003430 4998 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003443 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003455 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003469 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003480 4998 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003492 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003503 4998 reconciler_common.go:293] 
"Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003515 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003527 4998 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003539 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003550 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003561 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003572 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003583 4998 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003594 4998 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003605 4998 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003617 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003628 4998 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003639 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003650 4998 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003660 4998 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 
crc kubenswrapper[4998]: I0227 10:19:03.003670 4998 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003683 4998 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003694 4998 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003706 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003716 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003727 4998 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003738 4998 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003750 4998 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003760 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003771 4998 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003782 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003793 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003805 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003817 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003874 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003889 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003900 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003912 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003925 4998 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003939 4998 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003950 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003962 4998 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003974 4998 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003985 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.003996 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004006 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004018 4998 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004029 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004040 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 
crc kubenswrapper[4998]: I0227 10:19:03.004051 4998 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004061 4998 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004072 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004084 4998 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004095 4998 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004107 4998 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004118 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004129 4998 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004140 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004151 4998 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004163 4998 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004173 4998 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004183 4998 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004194 4998 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004204 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004215 4998 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004243 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004255 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004267 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004278 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004289 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004299 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") 
on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004310 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004321 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004332 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004344 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004355 4998 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004366 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004379 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 
27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004390 4998 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004401 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004414 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004425 4998 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004435 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004446 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004457 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004468 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004479 4998 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004492 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004504 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004516 4998 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004527 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004537 4998 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.004776 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") 
pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.010296 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.017569 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.044817 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.044892 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.044905 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.044924 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.044938 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.051122 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.059353 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:19:03 crc kubenswrapper[4998]: W0227 10:19:03.063093 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-2f9c4a0398f63a73e9935f00258e00a022bb230896080ee45b80c1f3f59b582f WatchSource:0}: Error finding container 2f9c4a0398f63a73e9935f00258e00a022bb230896080ee45b80c1f3f59b582f: Status 404 returned error can't find the container with id 2f9c4a0398f63a73e9935f00258e00a022bb230896080ee45b80c1f3f59b582f Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.067004 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.068473 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:03 crc kubenswrapper[4998]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 27 10:19:03 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:03 crc kubenswrapper[4998]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 27 10:19:03 crc kubenswrapper[4998]: source /etc/kubernetes/apiserver-url.env Feb 27 10:19:03 crc kubenswrapper[4998]: else Feb 27 10:19:03 crc kubenswrapper[4998]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 27 10:19:03 crc kubenswrapper[4998]: exit 1 Feb 27 10:19:03 crc kubenswrapper[4998]: fi Feb 27 10:19:03 crc kubenswrapper[4998]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 27 10:19:03 crc kubenswrapper[4998]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:03 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.069712 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.075277 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:03 crc kubenswrapper[4998]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:19:03 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Feb 27 10:19:03 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:03 crc kubenswrapper[4998]: source "/env/_master" Feb 27 10:19:03 crc kubenswrapper[4998]: set +o allexport Feb 27 10:19:03 crc kubenswrapper[4998]: fi Feb 27 10:19:03 crc kubenswrapper[4998]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 27 10:19:03 crc kubenswrapper[4998]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 27 10:19:03 crc kubenswrapper[4998]: ho_enable="--enable-hybrid-overlay" Feb 27 10:19:03 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 27 10:19:03 crc kubenswrapper[4998]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 27 10:19:03 crc kubenswrapper[4998]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 27 10:19:03 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:19:03 crc kubenswrapper[4998]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 27 10:19:03 crc kubenswrapper[4998]: --webhook-host=127.0.0.1 \ Feb 27 10:19:03 crc kubenswrapper[4998]: --webhook-port=9743 \ Feb 27 10:19:03 crc kubenswrapper[4998]: ${ho_enable} \ Feb 27 10:19:03 crc kubenswrapper[4998]: --enable-interconnect \ Feb 27 10:19:03 crc kubenswrapper[4998]: --disable-approver \ Feb 27 10:19:03 crc kubenswrapper[4998]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 27 10:19:03 crc kubenswrapper[4998]: --wait-for-kubernetes-api=200s \ Feb 27 10:19:03 crc kubenswrapper[4998]: 
--pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 27 10:19:03 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}" Feb 27 10:19:03 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:03 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.077929 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:03 crc kubenswrapper[4998]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:19:03 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Feb 27 10:19:03 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:03 crc kubenswrapper[4998]: source "/env/_master" Feb 27 10:19:03 crc kubenswrapper[4998]: set +o allexport Feb 27 10:19:03 crc kubenswrapper[4998]: fi Feb 27 10:19:03 crc kubenswrapper[4998]: Feb 27 10:19:03 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 27 10:19:03 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:19:03 crc kubenswrapper[4998]: --disable-webhook \ Feb 27 10:19:03 crc kubenswrapper[4998]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 27 10:19:03 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}" Feb 27 10:19:03 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:03 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.079207 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.082597 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePo
licy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.084279 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.105104 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.105160 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.105173 4998 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.142243 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2f9c4a0398f63a73e9935f00258e00a022bb230896080ee45b80c1f3f59b582f"} Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.143874 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:03 crc kubenswrapper[4998]: container 
&Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 27 10:19:03 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:03 crc kubenswrapper[4998]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 27 10:19:03 crc kubenswrapper[4998]: source /etc/kubernetes/apiserver-url.env Feb 27 10:19:03 crc kubenswrapper[4998]: else Feb 27 10:19:03 crc kubenswrapper[4998]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 27 10:19:03 crc kubenswrapper[4998]: exit 1 Feb 27 10:19:03 crc kubenswrapper[4998]: fi Feb 27 10:19:03 crc kubenswrapper[4998]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 27 10:19:03 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c
64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_
CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:03 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.144022 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ec443a1c3d86f846f94fe2ee29fc7f6b2a2bacd616ce3546b76410b26b9a7494"} Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.145422 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.146774 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1f455a4ba05bcb13f57541cadf3342b35dcf6895aaa7799eb8c191f7c774e16f"} Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.147012 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.147071 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.147085 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.147127 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.147141 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.147343 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.148272 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:03 crc kubenswrapper[4998]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:19:03 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Feb 27 10:19:03 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:03 crc kubenswrapper[4998]: source "/env/_master" Feb 27 10:19:03 crc kubenswrapper[4998]: set +o allexport Feb 27 10:19:03 crc 
kubenswrapper[4998]: fi Feb 27 10:19:03 crc kubenswrapper[4998]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 27 10:19:03 crc kubenswrapper[4998]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 27 10:19:03 crc kubenswrapper[4998]: ho_enable="--enable-hybrid-overlay" Feb 27 10:19:03 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 27 10:19:03 crc kubenswrapper[4998]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 27 10:19:03 crc kubenswrapper[4998]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 27 10:19:03 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:19:03 crc kubenswrapper[4998]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 27 10:19:03 crc kubenswrapper[4998]: --webhook-host=127.0.0.1 \ Feb 27 10:19:03 crc kubenswrapper[4998]: --webhook-port=9743 \ Feb 27 10:19:03 crc kubenswrapper[4998]: ${ho_enable} \ Feb 27 10:19:03 crc kubenswrapper[4998]: --enable-interconnect \ Feb 27 10:19:03 crc kubenswrapper[4998]: --disable-approver \ Feb 27 10:19:03 crc kubenswrapper[4998]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 27 10:19:03 crc kubenswrapper[4998]: --wait-for-kubernetes-api=200s \ Feb 27 10:19:03 crc kubenswrapper[4998]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 27 10:19:03 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}" Feb 27 10:19:03 crc kubenswrapper[4998]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:03 crc 
kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.149116 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.151183 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:03 crc kubenswrapper[4998]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:19:03 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Feb 27 10:19:03 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:03 crc kubenswrapper[4998]: source "/env/_master" Feb 27 10:19:03 crc kubenswrapper[4998]: set +o allexport Feb 27 10:19:03 crc kubenswrapper[4998]: fi Feb 27 10:19:03 crc kubenswrapper[4998]: Feb 27 10:19:03 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 27 10:19:03 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:19:03 crc kubenswrapper[4998]: --disable-webhook \ Feb 27 10:19:03 crc kubenswrapper[4998]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 27 10:19:03 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}" Feb 27 10:19:03 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:03 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.152272 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.154492 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.164300 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.172256 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.181040 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.189447 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.201470 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.208938 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.215997 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.224047 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.235620 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.248133 4998 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.249924 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.249952 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.249980 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.249997 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.250006 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.257079 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.352800 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.352861 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.352872 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.352890 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.352901 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.408656 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.408871 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:19:04.408843397 +0000 UTC m=+96.407114365 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.455885 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.455957 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.455971 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.455997 4998 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.456014 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.510134 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.510174 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.510193 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.510215 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510341 4998 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510357 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510419 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510448 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510397 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:04.51038325 +0000 UTC m=+96.508654218 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510467 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510460 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510497 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:04.510479992 +0000 UTC m=+96.508750960 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510509 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510518 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:04.510509622 +0000 UTC m=+96.508780590 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510523 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:03 crc kubenswrapper[4998]: E0227 10:19:03.510579 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2026-02-27 10:19:04.510557173 +0000 UTC m=+96.508828141 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.559113 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.559166 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.559180 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.559199 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.559266 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.663437 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.663484 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.663496 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.663513 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.663526 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.766912 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.766965 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.766976 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.767000 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.767013 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.870200 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.870288 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.870305 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.870323 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.870335 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.973882 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.973939 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.973952 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.973971 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:03 crc kubenswrapper[4998]: I0227 10:19:03.973984 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:03Z","lastTransitionTime":"2026-02-27T10:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.076926 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.076974 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.076984 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.077005 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.077016 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:04Z","lastTransitionTime":"2026-02-27T10:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.180847 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.180919 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.180934 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.180956 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.180971 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:04Z","lastTransitionTime":"2026-02-27T10:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.284051 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.284096 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.284105 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.284123 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.284144 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:04Z","lastTransitionTime":"2026-02-27T10:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.387432 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.387479 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.387493 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.387516 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.387530 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:04Z","lastTransitionTime":"2026-02-27T10:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.418777 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.419165 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 10:19:06.419116504 +0000 UTC m=+98.417387592 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.490246 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.490309 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.490323 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.490347 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.490365 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:04Z","lastTransitionTime":"2026-02-27T10:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.519849 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.519928 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.519967 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.520006 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520093 4998 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520162 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520218 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520281 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520307 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:06.520273058 +0000 UTC m=+98.518544066 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520285 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520348 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520397 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:06.52038473 +0000 UTC m=+98.518655728 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520312 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520453 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:06.520442211 +0000 UTC m=+98.518713219 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520155 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.520521 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:06.520510623 +0000 UTC m=+98.518781621 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.593613 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.593718 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.593786 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.593825 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.593879 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:04Z","lastTransitionTime":"2026-02-27T10:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.696989 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.697062 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.697076 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.697095 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.697131 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:04Z","lastTransitionTime":"2026-02-27T10:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.764617 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.764689 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.764760 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.764846 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.764991 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:04 crc kubenswrapper[4998]: E0227 10:19:04.765112 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.769814 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.771045 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.773913 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.775266 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.777433 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.778456 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.779720 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.781727 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.783264 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.785312 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.786651 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.788901 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.790082 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.791152 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.793050 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.794265 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.796104 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.797353 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.798671 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.799931 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.799985 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.799998 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.800016 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.800026 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:04Z","lastTransitionTime":"2026-02-27T10:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.800835 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.801781 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.803002 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.805026 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.806626 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.808561 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.809962 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.812422 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.813450 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.815486 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.816645 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.817665 4998 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.817878 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.822375 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.823161 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.824712 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.827145 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.828152 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.829614 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.830522 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.832115 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.832821 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.834372 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.835364 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.836827 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.837543 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.839056 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.839956 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.842052 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.842903 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.844458 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.845136 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.846211 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.848176 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.849283 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.903518 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.903600 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.903611 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.903631 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:04 crc kubenswrapper[4998]: I0227 10:19:04.903644 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:04Z","lastTransitionTime":"2026-02-27T10:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.007180 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.007287 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.007307 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.007344 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.007363 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.109807 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.109854 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.109865 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.109884 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.109896 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.212914 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.212968 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.212978 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.212992 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.213001 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.315449 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.315492 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.315503 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.315520 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.315531 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.417866 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.417935 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.417958 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.417986 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.418007 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.521118 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.521165 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.521179 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.521200 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.521215 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.624044 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.624094 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.624147 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.624172 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.624189 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.727431 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.727475 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.727486 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.727506 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.727520 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.829908 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.829959 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.829971 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.829992 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.830005 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.932591 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.932630 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.932638 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.932654 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:05 crc kubenswrapper[4998]: I0227 10:19:05.932666 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:05Z","lastTransitionTime":"2026-02-27T10:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.035221 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.035291 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.035303 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.035320 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.035332 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.140553 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.140587 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.140597 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.140613 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.140623 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.242641 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.242682 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.242692 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.242708 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.242722 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.345710 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.345748 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.345760 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.345775 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.345785 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.432346 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.432526 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-02-27 10:19:10.432486702 +0000 UTC m=+102.430757710 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.448854 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.448903 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.448918 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.448938 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.448952 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.471351 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.471397 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.471413 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.471433 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.471450 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.489067 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.495143 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.495200 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.495215 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.495269 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.495285 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.508178 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.513287 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.513349 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.513365 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.513417 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.513435 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.527192 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.532092 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.532187 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.532292 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.532328 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.532351 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.533377 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.533425 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.533449 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.533474 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533589 4998 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533600 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533758 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533774 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533602 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533721 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:10.533692157 +0000 UTC m=+102.531963125 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533910 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:10.533890431 +0000 UTC m=+102.532161419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533925 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:10.533917682 +0000 UTC m=+102.532188660 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533952 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533976 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.533993 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.534066 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:10.534038454 +0000 UTC m=+102.532309612 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.546937 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.550886 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.551113 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.551301 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.551479 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.551618 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.564609 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.565159 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.567025 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.567214 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.567396 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.567537 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.567654 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.671005 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.671084 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.671106 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.671130 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.671146 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.764775 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.764786 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.765000 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.765526 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.765560 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:06 crc kubenswrapper[4998]: E0227 10:19:06.765856 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.774313 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.774394 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.774436 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.774467 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.774490 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.877323 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.877691 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.877873 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.878101 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.878366 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.981495 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.981553 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.981575 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.981606 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:06 crc kubenswrapper[4998]: I0227 10:19:06.981630 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:06Z","lastTransitionTime":"2026-02-27T10:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.084250 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.084286 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.084296 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.084311 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.084322 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:07Z","lastTransitionTime":"2026-02-27T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.186985 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.187038 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.187059 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.187088 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.187110 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:07Z","lastTransitionTime":"2026-02-27T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.289823 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.289862 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.289878 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.289900 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.289916 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:07Z","lastTransitionTime":"2026-02-27T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.392163 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.392213 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.392249 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.392268 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.392280 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:07Z","lastTransitionTime":"2026-02-27T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.495176 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.495675 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.495890 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.496087 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.496320 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:07Z","lastTransitionTime":"2026-02-27T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.599028 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.599326 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.599407 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.599510 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.599622 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:07Z","lastTransitionTime":"2026-02-27T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.702081 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.702122 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.702132 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.702149 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.702159 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:07Z","lastTransitionTime":"2026-02-27T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.804491 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.804546 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.804563 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.804589 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.804610 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:07Z","lastTransitionTime":"2026-02-27T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.907299 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.907350 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.907361 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.907378 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:07 crc kubenswrapper[4998]: I0227 10:19:07.907390 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:07Z","lastTransitionTime":"2026-02-27T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.010092 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.010141 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.010156 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.010178 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.010194 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.112657 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.112697 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.112708 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.112725 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.112737 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.217120 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.217182 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.217199 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.217223 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.217263 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.320019 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.320077 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.320099 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.320129 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.320155 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.423846 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.423984 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.424004 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.424032 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.424061 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.527440 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.527498 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.527510 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.527528 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.527539 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.631069 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.631147 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.631166 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.631192 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.631209 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.733403 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.733450 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.733460 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.733475 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.733485 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.764947 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.764997 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.765374 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:08 crc kubenswrapper[4998]: E0227 10:19:08.765190 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:08 crc kubenswrapper[4998]: E0227 10:19:08.765481 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:08 crc kubenswrapper[4998]: E0227 10:19:08.765608 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.782487 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.799045 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.811103 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.822557 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.835483 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.835528 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.835543 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.835564 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.835579 4998 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.837384 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.846068 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.938823 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.938923 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.938942 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 
10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.939011 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:08 crc kubenswrapper[4998]: I0227 10:19:08.939029 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:08Z","lastTransitionTime":"2026-02-27T10:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.041891 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.041946 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.041958 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.041979 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.041993 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.145264 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.145319 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.145334 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.145356 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.145377 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.248140 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.248191 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.248203 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.248248 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.248261 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.351324 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.351378 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.351387 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.351407 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.351420 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.454846 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.454936 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.454962 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.454995 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.455034 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.560894 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.561413 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.561428 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.561449 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.561463 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.664160 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.664276 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.664303 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.664334 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.664357 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.767107 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.767181 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.767203 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.767267 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.767297 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.782677 4998 scope.go:117] "RemoveContainer" containerID="9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2" Feb 27 10:19:09 crc kubenswrapper[4998]: E0227 10:19:09.783306 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.783736 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.827533 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qcfqc"] Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.827984 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qcfqc" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.830819 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.831481 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.831584 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.840441 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.850476 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.863337 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jctzd\" (UniqueName: \"kubernetes.io/projected/9652967a-d4bf-4304-bd25-4fed87e89b10-kube-api-access-jctzd\") pod \"node-resolver-qcfqc\" (UID: \"9652967a-d4bf-4304-bd25-4fed87e89b10\") " pod="openshift-dns/node-resolver-qcfqc" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.863328 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.863422 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9652967a-d4bf-4304-bd25-4fed87e89b10-hosts-file\") pod \"node-resolver-qcfqc\" (UID: \"9652967a-d4bf-4304-bd25-4fed87e89b10\") " pod="openshift-dns/node-resolver-qcfqc" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.870501 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.870557 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.870569 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.870587 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.870602 4998 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.875074 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.887370 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.896132 4998 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.906645 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.918899 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.964342 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jctzd\" (UniqueName: \"kubernetes.io/projected/9652967a-d4bf-4304-bd25-4fed87e89b10-kube-api-access-jctzd\") pod \"node-resolver-qcfqc\" (UID: \"9652967a-d4bf-4304-bd25-4fed87e89b10\") " pod="openshift-dns/node-resolver-qcfqc" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.964382 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9652967a-d4bf-4304-bd25-4fed87e89b10-hosts-file\") pod \"node-resolver-qcfqc\" (UID: \"9652967a-d4bf-4304-bd25-4fed87e89b10\") " pod="openshift-dns/node-resolver-qcfqc" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.964449 
4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9652967a-d4bf-4304-bd25-4fed87e89b10-hosts-file\") pod \"node-resolver-qcfqc\" (UID: \"9652967a-d4bf-4304-bd25-4fed87e89b10\") " pod="openshift-dns/node-resolver-qcfqc" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.972953 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.973012 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.973029 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.973045 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.973055 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:09Z","lastTransitionTime":"2026-02-27T10:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:09 crc kubenswrapper[4998]: I0227 10:19:09.981633 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jctzd\" (UniqueName: \"kubernetes.io/projected/9652967a-d4bf-4304-bd25-4fed87e89b10-kube-api-access-jctzd\") pod \"node-resolver-qcfqc\" (UID: \"9652967a-d4bf-4304-bd25-4fed87e89b10\") " pod="openshift-dns/node-resolver-qcfqc" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.075532 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.075588 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.075642 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.075667 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.075685 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:10Z","lastTransitionTime":"2026-02-27T10:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.141217 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qcfqc" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.155935 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:10 crc kubenswrapper[4998]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 27 10:19:10 crc kubenswrapper[4998]: set -uo pipefail Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 27 10:19:10 crc kubenswrapper[4998]: HOSTS_FILE="/etc/hosts" Feb 27 10:19:10 crc kubenswrapper[4998]: TEMP_FILE="/etc/hosts.tmp" Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: # Make a temporary file with the old hosts file's attributes. Feb 27 10:19:10 crc kubenswrapper[4998]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 27 10:19:10 crc kubenswrapper[4998]: echo "Failed to preserve hosts file. Exiting." Feb 27 10:19:10 crc kubenswrapper[4998]: exit 1 Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: while true; do Feb 27 10:19:10 crc kubenswrapper[4998]: declare -A svc_ips Feb 27 10:19:10 crc kubenswrapper[4998]: for svc in "${services[@]}"; do Feb 27 10:19:10 crc kubenswrapper[4998]: # Fetch service IP from cluster dns if present. We make several tries Feb 27 10:19:10 crc kubenswrapper[4998]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Feb 27 10:19:10 crc kubenswrapper[4998]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 27 10:19:10 crc kubenswrapper[4998]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 27 10:19:10 crc kubenswrapper[4998]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:19:10 crc kubenswrapper[4998]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:19:10 crc kubenswrapper[4998]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:19:10 crc kubenswrapper[4998]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 27 10:19:10 crc kubenswrapper[4998]: for i in ${!cmds[*]} Feb 27 10:19:10 crc kubenswrapper[4998]: do Feb 27 10:19:10 crc kubenswrapper[4998]: ips=($(eval "${cmds[i]}")) Feb 27 10:19:10 crc kubenswrapper[4998]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 27 10:19:10 crc kubenswrapper[4998]: svc_ips["${svc}"]="${ips[@]}" Feb 27 10:19:10 crc kubenswrapper[4998]: break Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: # Update /etc/hosts only if we get valid service IPs Feb 27 10:19:10 crc kubenswrapper[4998]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 27 10:19:10 crc kubenswrapper[4998]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 27 10:19:10 crc kubenswrapper[4998]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 27 10:19:10 crc kubenswrapper[4998]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 27 10:19:10 crc kubenswrapper[4998]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 27 10:19:10 crc kubenswrapper[4998]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 27 10:19:10 crc kubenswrapper[4998]: sleep 60 & wait Feb 27 10:19:10 crc kubenswrapper[4998]: continue Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: # Append resolver entries for services Feb 27 10:19:10 crc kubenswrapper[4998]: rc=0 Feb 27 10:19:10 crc kubenswrapper[4998]: for svc in "${!svc_ips[@]}"; do Feb 27 10:19:10 crc kubenswrapper[4998]: for ip in ${svc_ips[${svc}]}; do Feb 27 10:19:10 crc kubenswrapper[4998]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: if [[ $rc -ne 0 ]]; then Feb 27 10:19:10 crc kubenswrapper[4998]: sleep 60 & wait Feb 27 10:19:10 crc kubenswrapper[4998]: continue Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 27 10:19:10 crc kubenswrapper[4998]: # Replace /etc/hosts with our modified version if needed Feb 27 10:19:10 crc kubenswrapper[4998]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 27 10:19:10 crc kubenswrapper[4998]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: sleep 60 & wait Feb 27 10:19:10 crc kubenswrapper[4998]: unset svc_ips Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jctzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-qcfqc_openshift-dns(9652967a-d4bf-4304-bd25-4fed87e89b10): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:10 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.158125 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-qcfqc" 
podUID="9652967a-d4bf-4304-bd25-4fed87e89b10" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.167062 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qcfqc" event={"ID":"9652967a-d4bf-4304-bd25-4fed87e89b10","Type":"ContainerStarted","Data":"9ad2fc9867e7ca50a72ae7194d0a4ba9b8bfd2347a5e9d77723597e2d027df3e"} Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.168069 4998 scope.go:117] "RemoveContainer" containerID="9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.168427 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.171360 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-46lvx"] Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.171684 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.172645 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:10 crc kubenswrapper[4998]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 27 10:19:10 crc kubenswrapper[4998]: set -uo pipefail Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 27 10:19:10 crc kubenswrapper[4998]: HOSTS_FILE="/etc/hosts" Feb 27 10:19:10 crc kubenswrapper[4998]: TEMP_FILE="/etc/hosts.tmp" Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: # Make a temporary file with the old hosts file's attributes. Feb 27 10:19:10 crc kubenswrapper[4998]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 27 10:19:10 crc kubenswrapper[4998]: echo "Failed to preserve hosts file. Exiting." Feb 27 10:19:10 crc kubenswrapper[4998]: exit 1 Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: while true; do Feb 27 10:19:10 crc kubenswrapper[4998]: declare -A svc_ips Feb 27 10:19:10 crc kubenswrapper[4998]: for svc in "${services[@]}"; do Feb 27 10:19:10 crc kubenswrapper[4998]: # Fetch service IP from cluster dns if present. We make several tries Feb 27 10:19:10 crc kubenswrapper[4998]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Feb 27 10:19:10 crc kubenswrapper[4998]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 27 10:19:10 crc kubenswrapper[4998]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 27 10:19:10 crc kubenswrapper[4998]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:19:10 crc kubenswrapper[4998]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:19:10 crc kubenswrapper[4998]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:19:10 crc kubenswrapper[4998]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 27 10:19:10 crc kubenswrapper[4998]: for i in ${!cmds[*]} Feb 27 10:19:10 crc kubenswrapper[4998]: do Feb 27 10:19:10 crc kubenswrapper[4998]: ips=($(eval "${cmds[i]}")) Feb 27 10:19:10 crc kubenswrapper[4998]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 27 10:19:10 crc kubenswrapper[4998]: svc_ips["${svc}"]="${ips[@]}" Feb 27 10:19:10 crc kubenswrapper[4998]: break Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: # Update /etc/hosts only if we get valid service IPs Feb 27 10:19:10 crc kubenswrapper[4998]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 27 10:19:10 crc kubenswrapper[4998]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 27 10:19:10 crc kubenswrapper[4998]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 27 10:19:10 crc kubenswrapper[4998]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 27 10:19:10 crc kubenswrapper[4998]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 27 10:19:10 crc kubenswrapper[4998]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 27 10:19:10 crc kubenswrapper[4998]: sleep 60 & wait Feb 27 10:19:10 crc kubenswrapper[4998]: continue Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: # Append resolver entries for services Feb 27 10:19:10 crc kubenswrapper[4998]: rc=0 Feb 27 10:19:10 crc kubenswrapper[4998]: for svc in "${!svc_ips[@]}"; do Feb 27 10:19:10 crc kubenswrapper[4998]: for ip in ${svc_ips[${svc}]}; do Feb 27 10:19:10 crc kubenswrapper[4998]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: if [[ $rc -ne 0 ]]; then Feb 27 10:19:10 crc kubenswrapper[4998]: sleep 60 & wait Feb 27 10:19:10 crc kubenswrapper[4998]: continue Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: Feb 27 10:19:10 crc kubenswrapper[4998]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 27 10:19:10 crc kubenswrapper[4998]: # Replace /etc/hosts with our modified version if needed Feb 27 10:19:10 crc kubenswrapper[4998]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 27 10:19:10 crc kubenswrapper[4998]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 27 10:19:10 crc kubenswrapper[4998]: fi Feb 27 10:19:10 crc kubenswrapper[4998]: sleep 60 & wait Feb 27 10:19:10 crc kubenswrapper[4998]: unset svc_ips Feb 27 10:19:10 crc kubenswrapper[4998]: done Feb 27 10:19:10 crc kubenswrapper[4998]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jctzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-qcfqc_openshift-dns(9652967a-d4bf-4304-bd25-4fed87e89b10): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:10 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.173815 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-qcfqc" 
podUID="9652967a-d4bf-4304-bd25-4fed87e89b10" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.175716 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.176005 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.176172 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.176547 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-l9x2p"] Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.177112 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-m6kr5"] Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.177266 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.177391 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.177466 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.177489 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.178577 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.178612 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.178629 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.178711 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.178804 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.179211 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.179247 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.178732 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:10Z","lastTransitionTime":"2026-02-27T10:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.180321 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.180583 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.181669 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.181706 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.182179 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.191893 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.199196 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.205403 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.215705 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.226017 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.237058 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.246564 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.253126 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266422 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/400c5e2f-5448-49c6-bf8e-04b21e552bb2-rootfs\") pod \"machine-config-daemon-m6kr5\" (UID: 
\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266472 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-system-cni-dir\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266517 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266534 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-cni-dir\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266552 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-os-release\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266663 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-daemon-config\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266709 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-run-netns\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266728 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400c5e2f-5448-49c6-bf8e-04b21e552bb2-mcd-auth-proxy-config\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266746 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a046a5ca-7081-4920-98af-1027a5bc29d0-cni-binary-copy\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266814 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/400c5e2f-5448-49c6-bf8e-04b21e552bb2-proxy-tls\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266904 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-run-k8s-cni-cncf-io\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266952 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-conf-dir\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.266976 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-os-release\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267001 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-var-lib-cni-multus\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267038 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-socket-dir-parent\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267060 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-var-lib-cni-bin\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267080 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-run-multus-certs\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267102 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l6t6\" (UniqueName: \"kubernetes.io/projected/400c5e2f-5448-49c6-bf8e-04b21e552bb2-kube-api-access-6l6t6\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267124 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-var-lib-kubelet\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267144 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-etc-kubernetes\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267164 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267183 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnkjs\" (UniqueName: \"kubernetes.io/projected/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-kube-api-access-gnkjs\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267202 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-system-cni-dir\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267269 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-cnibin\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267302 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-cnibin\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 
10:19:10.267323 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-hostroot\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267344 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267363 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s287\" (UniqueName: \"kubernetes.io/projected/a046a5ca-7081-4920-98af-1027a5bc29d0-kube-api-access-7s287\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.267711 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\
\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.281480 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.281519 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.281535 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.281556 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.281570 4998 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:10Z","lastTransitionTime":"2026-02-27T10:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.283854 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.294746 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.305733 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.313877 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.324312 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.333342 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.343188 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.351350 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.360012 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368466 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-run-k8s-cni-cncf-io\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368497 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-conf-dir\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368513 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-os-release\") pod 
\"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368529 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-var-lib-cni-multus\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368551 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-socket-dir-parent\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368565 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-var-lib-cni-bin\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368579 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-run-multus-certs\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368594 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l6t6\" (UniqueName: \"kubernetes.io/projected/400c5e2f-5448-49c6-bf8e-04b21e552bb2-kube-api-access-6l6t6\") pod \"machine-config-daemon-m6kr5\" (UID: 
\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368609 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-var-lib-kubelet\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368614 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-conf-dir\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368623 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-etc-kubernetes\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368662 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368683 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnkjs\" (UniqueName: \"kubernetes.io/projected/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-kube-api-access-gnkjs\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " 
pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368701 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-system-cni-dir\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368715 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-os-release\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368731 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-cnibin\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368663 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-etc-kubernetes\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368775 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-cnibin\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368750 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-var-lib-cni-multus\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368781 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-socket-dir-parent\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368798 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-var-lib-cni-bin\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368820 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-run-multus-certs\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368909 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-var-lib-kubelet\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368930 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-cnibin\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368958 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-system-cni-dir\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368718 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-run-k8s-cni-cncf-io\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368747 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-cnibin\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.368988 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-hostroot\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369007 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: 
\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369028 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s287\" (UniqueName: \"kubernetes.io/projected/a046a5ca-7081-4920-98af-1027a5bc29d0-kube-api-access-7s287\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369048 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/400c5e2f-5448-49c6-bf8e-04b21e552bb2-rootfs\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369072 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/400c5e2f-5448-49c6-bf8e-04b21e552bb2-rootfs\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369094 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-system-cni-dir\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369074 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-system-cni-dir\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: 
\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369282 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369306 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-cni-dir\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369328 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-os-release\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369351 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-daemon-config\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369375 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-run-netns\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc 
kubenswrapper[4998]: I0227 10:19:10.369400 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400c5e2f-5448-49c6-bf8e-04b21e552bb2-mcd-auth-proxy-config\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369425 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a046a5ca-7081-4920-98af-1027a5bc29d0-cni-binary-copy\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369433 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-hostroot\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369449 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/400c5e2f-5448-49c6-bf8e-04b21e552bb2-proxy-tls\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369495 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369583 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-cni-dir\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369591 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-host-run-netns\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369765 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.369896 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a046a5ca-7081-4920-98af-1027a5bc29d0-os-release\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.370068 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.370082 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/400c5e2f-5448-49c6-bf8e-04b21e552bb2-mcd-auth-proxy-config\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.370471 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a046a5ca-7081-4920-98af-1027a5bc29d0-cni-binary-copy\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.370661 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a046a5ca-7081-4920-98af-1027a5bc29d0-multus-daemon-config\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.373703 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/400c5e2f-5448-49c6-bf8e-04b21e552bb2-proxy-tls\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.383600 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.383630 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.383639 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.383651 4998 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.383662 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:10Z","lastTransitionTime":"2026-02-27T10:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.384356 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l6t6\" (UniqueName: \"kubernetes.io/projected/400c5e2f-5448-49c6-bf8e-04b21e552bb2-kube-api-access-6l6t6\") pod \"machine-config-daemon-m6kr5\" (UID: \"400c5e2f-5448-49c6-bf8e-04b21e552bb2\") " pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.388737 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnkjs\" (UniqueName: \"kubernetes.io/projected/e55e9768-52ee-4fcf-a279-1b55e6d6c6fd-kube-api-access-gnkjs\") pod \"multus-additional-cni-plugins-l9x2p\" (UID: \"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\") " pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.390019 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s287\" (UniqueName: \"kubernetes.io/projected/a046a5ca-7081-4920-98af-1027a5bc29d0-kube-api-access-7s287\") pod \"multus-46lvx\" (UID: \"a046a5ca-7081-4920-98af-1027a5bc29d0\") " pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.470498 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.470671 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:19:18.470640194 +0000 UTC m=+110.468911172 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.486906 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.486945 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.486956 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.486971 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.486982 4998 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:10Z","lastTransitionTime":"2026-02-27T10:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.495334 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-46lvx" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.503584 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.508325 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.510981 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:10 crc kubenswrapper[4998]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 27 10:19:10 crc kubenswrapper[4998]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 27 10:19:10 crc kubenswrapper[4998]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s287,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-46lvx_openshift-multus(a046a5ca-7081-4920-98af-1027a5bc29d0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:10 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.512338 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-46lvx" podUID="a046a5ca-7081-4920-98af-1027a5bc29d0" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.529502 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l6t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:10 crc kubenswrapper[4998]: W0227 10:19:10.529600 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode55e9768_52ee_4fcf_a279_1b55e6d6c6fd.slice/crio-b7837db83f4f580b03605c1e13fcefc9973f90f096cff84421840956beae3c78 WatchSource:0}: Error finding container b7837db83f4f580b03605c1e13fcefc9973f90f096cff84421840956beae3c78: Status 404 returned error can't find the 
container with id b7837db83f4f580b03605c1e13fcefc9973f90f096cff84421840956beae3c78 Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.532074 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l6t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.533855 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.534010 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnkjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-l9x2p_openshift-multus(e55e9768-52ee-4fcf-a279-1b55e6d6c6fd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.536542 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" podUID="e55e9768-52ee-4fcf-a279-1b55e6d6c6fd" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.544091 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wh9xl"] Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.545305 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.548826 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.549295 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.551588 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.551717 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.551603 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.551811 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.551597 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.560975 4998 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\
\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dd
a3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.571407 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-ovn-kubernetes\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.571514 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.571626 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.571687 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-ovn\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.571726 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.571742 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-config\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.571766 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.571781 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-kubelet\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.571786 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.571804 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.571843 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-env-overrides\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.571930 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:18.57190541 +0000 UTC m=+110.570176508 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.571960 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xxgq\" (UniqueName: \"kubernetes.io/projected/bceef7ff-b99d-432e-b9cb-7c538c82b74b-kube-api-access-9xxgq\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.571981 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:18.571959761 +0000 UTC m=+110.570230749 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572015 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572055 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-script-lib\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572102 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-systemd\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572165 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-netns\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 
10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572223 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-var-lib-openvswitch\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572316 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-node-log\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572381 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-bin\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572420 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-slash\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572454 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-openvswitch\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 
27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572530 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572589 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovn-node-metrics-cert\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572644 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.572678 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.572699 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.572714 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.572777 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:18.572765289 +0000 UTC m=+110.571036397 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572695 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-systemd-units\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.572791 4998 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572844 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-etc-openvswitch\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 
10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.572879 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:18.572851331 +0000 UTC m=+110.571122429 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.572928 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-netd\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.573002 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-log-socket\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.573105 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.590871 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.591139 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.591313 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.591111 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.591469 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.591621 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:10Z","lastTransitionTime":"2026-02-27T10:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.601385 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.611108 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.627314 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.642556 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.654406 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674295 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-slash\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674364 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-openvswitch\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674405 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-systemd-units\") pod 
\"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674430 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-etc-openvswitch\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674505 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovn-node-metrics-cert\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674538 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-netd\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674561 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-log-socket\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674551 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-slash\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674606 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-ovn\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674630 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-ovn-kubernetes\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674686 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-ovn-kubernetes\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674723 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-config\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674748 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-openvswitch\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc 
kubenswrapper[4998]: I0227 10:19:10.674770 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-env-overrides\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674788 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-systemd-units\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674812 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xxgq\" (UniqueName: \"kubernetes.io/projected/bceef7ff-b99d-432e-b9cb-7c538c82b74b-kube-api-access-9xxgq\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674826 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-etc-openvswitch\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674853 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-kubelet\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674896 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-systemd\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674932 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.674978 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-script-lib\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675032 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-netns\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675064 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-var-lib-openvswitch\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675105 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-node-log\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675147 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-bin\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675274 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-bin\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675473 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-netns\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675479 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675628 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-systemd\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675653 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-kubelet\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675683 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-log-socket\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675785 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-node-log\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675790 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-ovn\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675792 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-var-lib-openvswitch\") pod \"ovnkube-node-wh9xl\" (UID: 
\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.675711 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-netd\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.676632 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-env-overrides\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.676661 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-config\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.678048 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-script-lib\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.681054 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.681928 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovn-node-metrics-cert\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.693018 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.693911 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xxgq\" (UniqueName: \"kubernetes.io/projected/bceef7ff-b99d-432e-b9cb-7c538c82b74b-kube-api-access-9xxgq\") pod \"ovnkube-node-wh9xl\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.694067 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.694117 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.694134 4998 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.694158 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.694176 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:10Z","lastTransitionTime":"2026-02-27T10:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.704374 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.714600 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.764529 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.764634 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.764676 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.764718 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.764920 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.765003 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.797619 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.797753 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.797772 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.797798 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.797855 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:10Z","lastTransitionTime":"2026-02-27T10:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.865769 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:10 crc kubenswrapper[4998]: W0227 10:19:10.886304 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbceef7ff_b99d_432e_b9cb_7c538c82b74b.slice/crio-22a3f9cd8b6410eb2f98b272fca1e976ac2afcdccedeffc1283ee8fa073179f6 WatchSource:0}: Error finding container 22a3f9cd8b6410eb2f98b272fca1e976ac2afcdccedeffc1283ee8fa073179f6: Status 404 returned error can't find the container with id 22a3f9cd8b6410eb2f98b272fca1e976ac2afcdccedeffc1283ee8fa073179f6 Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.890852 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:10 crc kubenswrapper[4998]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 27 10:19:10 crc kubenswrapper[4998]: apiVersion: v1 Feb 27 10:19:10 crc kubenswrapper[4998]: clusters: Feb 27 10:19:10 crc kubenswrapper[4998]: - cluster: Feb 27 10:19:10 crc kubenswrapper[4998]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 27 10:19:10 crc kubenswrapper[4998]: server: https://api-int.crc.testing:6443 Feb 27 10:19:10 crc kubenswrapper[4998]: name: default-cluster Feb 27 10:19:10 crc kubenswrapper[4998]: contexts: Feb 27 10:19:10 crc kubenswrapper[4998]: - context: Feb 27 10:19:10 crc kubenswrapper[4998]: cluster: default-cluster Feb 27 10:19:10 crc kubenswrapper[4998]: namespace: default Feb 27 10:19:10 crc kubenswrapper[4998]: user: default-auth Feb 27 10:19:10 crc kubenswrapper[4998]: name: default-context Feb 27 10:19:10 crc kubenswrapper[4998]: current-context: default-context Feb 27 10:19:10 crc kubenswrapper[4998]: kind: Config Feb 27 10:19:10 crc kubenswrapper[4998]: preferences: {} Feb 27 10:19:10 crc kubenswrapper[4998]: 
users: Feb 27 10:19:10 crc kubenswrapper[4998]: - name: default-auth Feb 27 10:19:10 crc kubenswrapper[4998]: user: Feb 27 10:19:10 crc kubenswrapper[4998]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:19:10 crc kubenswrapper[4998]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:19:10 crc kubenswrapper[4998]: EOF Feb 27 10:19:10 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xxgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:10 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:10 crc kubenswrapper[4998]: E0227 10:19:10.892083 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 
10:19:10.899991 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.900062 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.900080 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.900134 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:10 crc kubenswrapper[4998]: I0227 10:19:10.900151 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:10Z","lastTransitionTime":"2026-02-27T10:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.002898 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.002931 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.002963 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.002979 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.002992 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.106844 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.106896 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.106910 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.106924 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.106933 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.171021 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"a8f11097d5fd657a714d9acbb21adaf0cf379d1884598cb128ccfb8a8013022e"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.171876 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46lvx" event={"ID":"a046a5ca-7081-4920-98af-1027a5bc29d0","Type":"ContainerStarted","Data":"1a8b4dc1a0c36c08cee077bd9ace5af98b2e4707df58b723804a60f06e9da8fc"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.172678 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"22a3f9cd8b6410eb2f98b272fca1e976ac2afcdccedeffc1283ee8fa073179f6"} Feb 27 10:19:11 crc kubenswrapper[4998]: E0227 10:19:11.172802 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l6t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:11 crc kubenswrapper[4998]: E0227 10:19:11.173747 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:11 crc kubenswrapper[4998]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 27 10:19:11 crc kubenswrapper[4998]: apiVersion: v1 Feb 27 10:19:11 crc 
kubenswrapper[4998]: clusters: Feb 27 10:19:11 crc kubenswrapper[4998]: - cluster: Feb 27 10:19:11 crc kubenswrapper[4998]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 27 10:19:11 crc kubenswrapper[4998]: server: https://api-int.crc.testing:6443 Feb 27 10:19:11 crc kubenswrapper[4998]: name: default-cluster Feb 27 10:19:11 crc kubenswrapper[4998]: contexts: Feb 27 10:19:11 crc kubenswrapper[4998]: - context: Feb 27 10:19:11 crc kubenswrapper[4998]: cluster: default-cluster Feb 27 10:19:11 crc kubenswrapper[4998]: namespace: default Feb 27 10:19:11 crc kubenswrapper[4998]: user: default-auth Feb 27 10:19:11 crc kubenswrapper[4998]: name: default-context Feb 27 10:19:11 crc kubenswrapper[4998]: current-context: default-context Feb 27 10:19:11 crc kubenswrapper[4998]: kind: Config Feb 27 10:19:11 crc kubenswrapper[4998]: preferences: {} Feb 27 10:19:11 crc kubenswrapper[4998]: users: Feb 27 10:19:11 crc kubenswrapper[4998]: - name: default-auth Feb 27 10:19:11 crc kubenswrapper[4998]: user: Feb 27 10:19:11 crc kubenswrapper[4998]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:19:11 crc kubenswrapper[4998]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:19:11 crc kubenswrapper[4998]: EOF Feb 27 10:19:11 crc kubenswrapper[4998]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xxgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:11 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.174044 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" event={"ID":"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd","Type":"ContainerStarted","Data":"b7837db83f4f580b03605c1e13fcefc9973f90f096cff84421840956beae3c78"} Feb 27 10:19:11 crc kubenswrapper[4998]: E0227 10:19:11.174448 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l6t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:11 crc kubenswrapper[4998]: E0227 10:19:11.174996 4998 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnkjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-l9x2p_openshift-multus(e55e9768-52ee-4fcf-a279-1b55e6d6c6fd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:11 crc kubenswrapper[4998]: E0227 10:19:11.175078 4998 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:19:11 crc kubenswrapper[4998]: E0227 10:19:11.175476 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:11 crc kubenswrapper[4998]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 27 10:19:11 crc kubenswrapper[4998]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 27 10:19:11 crc kubenswrapper[4998]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s287,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-46lvx_openshift-multus(a046a5ca-7081-4920-98af-1027a5bc29d0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:11 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:11 crc kubenswrapper[4998]: E0227 10:19:11.175512 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" 
podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:19:11 crc kubenswrapper[4998]: E0227 10:19:11.176610 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" podUID="e55e9768-52ee-4fcf-a279-1b55e6d6c6fd" Feb 27 10:19:11 crc kubenswrapper[4998]: E0227 10:19:11.177090 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-46lvx" podUID="a046a5ca-7081-4920-98af-1027a5bc29d0" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.185724 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.197776 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.209382 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.209431 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.209441 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.209455 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.209463 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.212542 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.223379 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.233676 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.243639 4998 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.259338 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.271184 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.286873 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.295761 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.308845 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.311512 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.311593 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.311611 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.311640 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.311657 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.319381 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.332366 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.342703 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.353192 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.361677 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.372056 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.380825 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.395956 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.406747 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.414525 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.414744 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.414822 4998 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.414907 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.414970 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.420728 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.435634 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.443319 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.456574 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.517782 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.517846 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.517867 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.517887 4998 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.517931 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.620114 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.620148 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.620158 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.620174 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.620184 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.722824 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.722870 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.722881 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.722897 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.722907 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.825044 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.825134 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.825159 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.825188 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.825214 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.928587 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.928628 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.928638 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.928654 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:11 crc kubenswrapper[4998]: I0227 10:19:11.928665 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:11Z","lastTransitionTime":"2026-02-27T10:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.031079 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.031124 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.031141 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.031159 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.031170 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.133105 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.133147 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.133158 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.133174 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.133186 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.235975 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.236013 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.236025 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.236043 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.236055 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.338664 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.338695 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.338703 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.338719 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.338730 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.441056 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.441099 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.441113 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.441128 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.441141 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.543899 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.543930 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.543938 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.543955 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.543971 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.647361 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.647426 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.647444 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.647469 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.647492 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.750315 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.750414 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.750443 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.750476 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.750497 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.764289 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.764345 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:12 crc kubenswrapper[4998]: E0227 10:19:12.764465 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.764535 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:12 crc kubenswrapper[4998]: E0227 10:19:12.764684 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:12 crc kubenswrapper[4998]: E0227 10:19:12.764885 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.857785 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.857854 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.857877 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.857906 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.857929 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.961307 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.961364 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.961380 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.961405 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:12 crc kubenswrapper[4998]: I0227 10:19:12.961445 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:12Z","lastTransitionTime":"2026-02-27T10:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.064166 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.064257 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.064268 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.064282 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.064291 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.167039 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.167114 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.167127 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.167147 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.167158 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.270524 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.270581 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.270597 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.270620 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.270637 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.373273 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.373312 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.373321 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.373335 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.373344 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.475810 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.475856 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.475868 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.475894 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.475907 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.577969 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.578004 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.578012 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.578024 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.578033 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.679798 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.679843 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.679854 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.679871 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.679881 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: E0227 10:19:13.766528 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:13 crc kubenswrapper[4998]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 27 10:19:13 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:13 crc kubenswrapper[4998]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 27 10:19:13 crc kubenswrapper[4998]: source /etc/kubernetes/apiserver-url.env Feb 27 10:19:13 crc kubenswrapper[4998]: else Feb 27 10:19:13 crc kubenswrapper[4998]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 27 10:19:13 crc kubenswrapper[4998]: exit 1 Feb 27 10:19:13 crc kubenswrapper[4998]: fi Feb 27 10:19:13 crc kubenswrapper[4998]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 27 10:19:13 crc kubenswrapper[4998]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:13 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:13 crc kubenswrapper[4998]: E0227 10:19:13.767756 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.782005 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 
10:19:13.782071 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.782090 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.782115 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.782133 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.884272 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.884322 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.884338 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.884357 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.884369 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.990952 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.991051 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.991070 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.991095 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:13 crc kubenswrapper[4998]: I0227 10:19:13.991111 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:13Z","lastTransitionTime":"2026-02-27T10:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.094014 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.094073 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.094088 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.094108 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.094121 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:14Z","lastTransitionTime":"2026-02-27T10:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.195706 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.195741 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.195750 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.195773 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.195782 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:14Z","lastTransitionTime":"2026-02-27T10:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.297764 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.297809 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.297826 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.297846 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.297874 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:14Z","lastTransitionTime":"2026-02-27T10:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.400127 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.400179 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.400195 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.400220 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.400261 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:14Z","lastTransitionTime":"2026-02-27T10:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.503108 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.503181 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.503213 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.503349 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.503384 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:14Z","lastTransitionTime":"2026-02-27T10:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.606265 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.606328 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.606346 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.606366 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.606382 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:14Z","lastTransitionTime":"2026-02-27T10:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.709780 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.710254 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.710339 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.710453 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.710533 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:14Z","lastTransitionTime":"2026-02-27T10:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.764358 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.764387 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.764607 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:14 crc kubenswrapper[4998]: E0227 10:19:14.764708 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:14 crc kubenswrapper[4998]: E0227 10:19:14.764799 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:14 crc kubenswrapper[4998]: E0227 10:19:14.764857 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:14 crc kubenswrapper[4998]: E0227 10:19:14.766677 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:14 crc kubenswrapper[4998]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:19:14 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Feb 27 10:19:14 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:14 crc kubenswrapper[4998]: source "/env/_master" Feb 27 10:19:14 crc kubenswrapper[4998]: set +o allexport Feb 27 10:19:14 crc kubenswrapper[4998]: fi Feb 27 10:19:14 crc kubenswrapper[4998]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 27 10:19:14 crc kubenswrapper[4998]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 27 10:19:14 crc kubenswrapper[4998]: ho_enable="--enable-hybrid-overlay" Feb 27 10:19:14 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 27 10:19:14 crc kubenswrapper[4998]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 27 10:19:14 crc kubenswrapper[4998]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 27 10:19:14 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:19:14 crc kubenswrapper[4998]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 27 10:19:14 crc kubenswrapper[4998]: --webhook-host=127.0.0.1 \ Feb 27 10:19:14 crc kubenswrapper[4998]: --webhook-port=9743 \ Feb 27 10:19:14 crc kubenswrapper[4998]: ${ho_enable} \ Feb 27 10:19:14 crc kubenswrapper[4998]: --enable-interconnect \ Feb 27 10:19:14 crc 
kubenswrapper[4998]: --disable-approver \ Feb 27 10:19:14 crc kubenswrapper[4998]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 27 10:19:14 crc kubenswrapper[4998]: --wait-for-kubernetes-api=200s \ Feb 27 10:19:14 crc kubenswrapper[4998]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 27 10:19:14 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}" Feb 27 10:19:14 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:14 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:14 crc kubenswrapper[4998]: E0227 10:19:14.766683 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:14 crc kubenswrapper[4998]: E0227 10:19:14.770217 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 27 10:19:14 crc kubenswrapper[4998]: E0227 10:19:14.771461 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:14 crc kubenswrapper[4998]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:19:14 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Feb 27 10:19:14 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:14 crc kubenswrapper[4998]: source "/env/_master" Feb 27 10:19:14 crc kubenswrapper[4998]: set +o allexport Feb 27 10:19:14 crc kubenswrapper[4998]: fi Feb 27 10:19:14 crc kubenswrapper[4998]: Feb 27 10:19:14 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 27 10:19:14 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:19:14 crc kubenswrapper[4998]: --disable-webhook \ Feb 27 10:19:14 crc kubenswrapper[4998]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 27 10:19:14 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}" Feb 27 10:19:14 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:14 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:14 crc kubenswrapper[4998]: E0227 10:19:14.773024 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.812790 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.812832 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.812842 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.812859 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.812868 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:14Z","lastTransitionTime":"2026-02-27T10:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.916007 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.916286 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.916354 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.916429 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:14 crc kubenswrapper[4998]: I0227 10:19:14.916494 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:14Z","lastTransitionTime":"2026-02-27T10:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.019508 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.019578 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.019597 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.019622 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.019651 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.122007 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.122046 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.122057 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.122072 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.122083 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.225516 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.225565 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.225574 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.225592 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.225601 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.328515 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.328552 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.328560 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.328592 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.328603 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.431182 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.431257 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.431270 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.431288 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.431298 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.533572 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.533642 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.533665 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.533695 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.533715 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.636094 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.636209 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.636237 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.636255 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.636266 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.738833 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.738886 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.738898 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.738916 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.738929 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.841546 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.841615 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.841639 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.841664 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.841682 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.944124 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.944183 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.944191 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.944220 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:15 crc kubenswrapper[4998]: I0227 10:19:15.944255 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:15Z","lastTransitionTime":"2026-02-27T10:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.047097 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.047139 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.047147 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.047161 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.047171 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.149835 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.149891 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.149909 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.149936 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.149950 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.252142 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.252180 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.252191 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.252208 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.252218 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.296813 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jl2nx"] Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.297128 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.302283 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.302518 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.302576 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.302692 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.316091 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.328074 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.336204 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c459deb8-e1ea-43de-a1b0-1b463eee4bdc-host\") pod \"node-ca-jl2nx\" (UID: \"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\") " pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.336275 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c459deb8-e1ea-43de-a1b0-1b463eee4bdc-serviceca\") pod \"node-ca-jl2nx\" (UID: \"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\") " pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.336300 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6kg\" (UniqueName: \"kubernetes.io/projected/c459deb8-e1ea-43de-a1b0-1b463eee4bdc-kube-api-access-jc6kg\") pod \"node-ca-jl2nx\" (UID: \"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\") " 
pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.338421 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.349115 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.354938 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.355092 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.355222 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.355390 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.355479 4998 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.360307 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.381503 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.410252 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.420217 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.429123 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.437022 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c459deb8-e1ea-43de-a1b0-1b463eee4bdc-host\") pod \"node-ca-jl2nx\" (UID: \"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\") " pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.437055 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c459deb8-e1ea-43de-a1b0-1b463eee4bdc-serviceca\") pod \"node-ca-jl2nx\" (UID: \"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\") " pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.437073 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jc6kg\" (UniqueName: \"kubernetes.io/projected/c459deb8-e1ea-43de-a1b0-1b463eee4bdc-kube-api-access-jc6kg\") pod \"node-ca-jl2nx\" (UID: \"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\") " pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.437205 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c459deb8-e1ea-43de-a1b0-1b463eee4bdc-host\") pod \"node-ca-jl2nx\" (UID: \"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\") " pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.438145 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c459deb8-e1ea-43de-a1b0-1b463eee4bdc-serviceca\") pod \"node-ca-jl2nx\" (UID: \"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\") " pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.439449 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.445818 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.451974 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6kg\" (UniqueName: \"kubernetes.io/projected/c459deb8-e1ea-43de-a1b0-1b463eee4bdc-kube-api-access-jc6kg\") pod \"node-ca-jl2nx\" (UID: \"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\") " pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.456878 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.456916 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.456925 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.456939 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.456947 4998 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.457722 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.469118 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.560110 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.560184 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.560207 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.560274 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.560301 4998 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.620023 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jl2nx" Feb 27 10:19:16 crc kubenswrapper[4998]: W0227 10:19:16.630336 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc459deb8_e1ea_43de_a1b0_1b463eee4bdc.slice/crio-be819714e7c3a47cd812b39d932e0f946ffbd7214b4b1b0121da17531de4de78 WatchSource:0}: Error finding container be819714e7c3a47cd812b39d932e0f946ffbd7214b4b1b0121da17531de4de78: Status 404 returned error can't find the container with id be819714e7c3a47cd812b39d932e0f946ffbd7214b4b1b0121da17531de4de78 Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.632420 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:16 crc kubenswrapper[4998]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 27 10:19:16 crc kubenswrapper[4998]: while [ true ]; Feb 27 10:19:16 crc kubenswrapper[4998]: do Feb 27 10:19:16 crc kubenswrapper[4998]: for f in $(ls /tmp/serviceca); do Feb 27 10:19:16 crc kubenswrapper[4998]: echo $f Feb 27 10:19:16 crc kubenswrapper[4998]: ca_file_path="/tmp/serviceca/${f}" Feb 27 10:19:16 crc kubenswrapper[4998]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 27 10:19:16 crc kubenswrapper[4998]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 27 10:19:16 
crc kubenswrapper[4998]: if [ -e "${reg_dir_path}" ]; then Feb 27 10:19:16 crc kubenswrapper[4998]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 27 10:19:16 crc kubenswrapper[4998]: else Feb 27 10:19:16 crc kubenswrapper[4998]: mkdir $reg_dir_path Feb 27 10:19:16 crc kubenswrapper[4998]: cp $ca_file_path $reg_dir_path/ca.crt Feb 27 10:19:16 crc kubenswrapper[4998]: fi Feb 27 10:19:16 crc kubenswrapper[4998]: done Feb 27 10:19:16 crc kubenswrapper[4998]: for d in $(ls /etc/docker/certs.d); do Feb 27 10:19:16 crc kubenswrapper[4998]: echo $d Feb 27 10:19:16 crc kubenswrapper[4998]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 27 10:19:16 crc kubenswrapper[4998]: reg_conf_path="/tmp/serviceca/${dp}" Feb 27 10:19:16 crc kubenswrapper[4998]: if [ ! -e "${reg_conf_path}" ]; then Feb 27 10:19:16 crc kubenswrapper[4998]: rm -rf /etc/docker/certs.d/$d Feb 27 10:19:16 crc kubenswrapper[4998]: fi Feb 27 10:19:16 crc kubenswrapper[4998]: done Feb 27 10:19:16 crc kubenswrapper[4998]: sleep 60 & wait ${!} Feb 27 10:19:16 crc kubenswrapper[4998]: done Feb 27 10:19:16 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc6kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-jl2nx_openshift-image-registry(c459deb8-e1ea-43de-a1b0-1b463eee4bdc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:16 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.633656 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-jl2nx" podUID="c459deb8-e1ea-43de-a1b0-1b463eee4bdc" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.662741 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.662784 4998 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.662800 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.662820 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.662835 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.667941 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.667986 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.668025 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.668055 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.668070 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.677619 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.681119 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.681170 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.681200 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.681220 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.681258 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.691435 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.694585 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.694621 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.694636 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.694652 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.694662 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.703802 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.707144 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.707192 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.707204 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.707219 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.707252 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.716077 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.718958 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.718988 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.719000 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.719021 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.719033 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.729025 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.729165 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.763868 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.763868 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.763990 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.764019 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.764100 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:16 crc kubenswrapper[4998]: E0227 10:19:16.764258 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.765699 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.765729 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.765740 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.765753 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.765764 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.868506 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.868606 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.868640 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.868670 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.868694 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.971000 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.971049 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.971059 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.971075 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:16 crc kubenswrapper[4998]: I0227 10:19:16.971085 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:16Z","lastTransitionTime":"2026-02-27T10:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.074251 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.074328 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.074353 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.074374 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.074386 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:17Z","lastTransitionTime":"2026-02-27T10:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.178330 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.178386 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.178401 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.178420 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.178435 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:17Z","lastTransitionTime":"2026-02-27T10:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.193828 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jl2nx" event={"ID":"c459deb8-e1ea-43de-a1b0-1b463eee4bdc","Type":"ContainerStarted","Data":"be819714e7c3a47cd812b39d932e0f946ffbd7214b4b1b0121da17531de4de78"} Feb 27 10:19:17 crc kubenswrapper[4998]: E0227 10:19:17.197374 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:17 crc kubenswrapper[4998]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 27 10:19:17 crc kubenswrapper[4998]: while [ true ]; Feb 27 10:19:17 crc kubenswrapper[4998]: do Feb 27 10:19:17 crc kubenswrapper[4998]: for f in $(ls /tmp/serviceca); do Feb 27 10:19:17 crc kubenswrapper[4998]: echo $f Feb 27 10:19:17 crc kubenswrapper[4998]: ca_file_path="/tmp/serviceca/${f}" Feb 27 10:19:17 crc kubenswrapper[4998]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 27 10:19:17 crc kubenswrapper[4998]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 27 10:19:17 crc kubenswrapper[4998]: if [ -e "${reg_dir_path}" ]; then Feb 27 10:19:17 crc kubenswrapper[4998]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 27 10:19:17 crc kubenswrapper[4998]: else Feb 27 10:19:17 crc kubenswrapper[4998]: mkdir $reg_dir_path Feb 27 10:19:17 crc kubenswrapper[4998]: cp $ca_file_path $reg_dir_path/ca.crt Feb 27 10:19:17 crc kubenswrapper[4998]: fi Feb 27 10:19:17 crc kubenswrapper[4998]: done Feb 27 10:19:17 crc kubenswrapper[4998]: for d in $(ls /etc/docker/certs.d); do Feb 27 10:19:17 crc kubenswrapper[4998]: echo $d Feb 27 10:19:17 crc kubenswrapper[4998]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 27 10:19:17 crc kubenswrapper[4998]: reg_conf_path="/tmp/serviceca/${dp}" Feb 27 10:19:17 crc kubenswrapper[4998]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 27 10:19:17 crc kubenswrapper[4998]: rm -rf /etc/docker/certs.d/$d Feb 27 10:19:17 crc kubenswrapper[4998]: fi Feb 27 10:19:17 crc kubenswrapper[4998]: done Feb 27 10:19:17 crc kubenswrapper[4998]: sleep 60 & wait ${!} Feb 27 10:19:17 crc kubenswrapper[4998]: done Feb 27 10:19:17 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc6kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-jl2nx_openshift-image-registry(c459deb8-e1ea-43de-a1b0-1b463eee4bdc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:17 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:17 crc kubenswrapper[4998]: E0227 10:19:17.199046 4998 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-jl2nx" podUID="c459deb8-e1ea-43de-a1b0-1b463eee4bdc" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.213101 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.223066 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.234072 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.244786 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.257874 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.267239 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.278736 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.282548 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.282704 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.282717 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.282736 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.282760 4998 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:17Z","lastTransitionTime":"2026-02-27T10:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.292035 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.302680 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.322402 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.334550 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services 
have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.344444 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.358939 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.386013 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.386060 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.386074 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.386091 4998 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.386105 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:17Z","lastTransitionTime":"2026-02-27T10:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.488972 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.489012 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.489023 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.489040 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.489051 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:17Z","lastTransitionTime":"2026-02-27T10:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.591107 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.591144 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.591154 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.591168 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.591177 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:17Z","lastTransitionTime":"2026-02-27T10:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.694214 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.694281 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.694293 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.694310 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.694322 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:17Z","lastTransitionTime":"2026-02-27T10:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.797716 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.797797 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.797816 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.797844 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.797868 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:17Z","lastTransitionTime":"2026-02-27T10:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.901697 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.901781 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.901799 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.901830 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:17 crc kubenswrapper[4998]: I0227 10:19:17.901899 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:17Z","lastTransitionTime":"2026-02-27T10:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.004555 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.004611 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.004624 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.004641 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.004654 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.107686 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.107818 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.107899 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.107935 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.108005 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.210061 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.210104 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.210115 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.210133 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.210147 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.312599 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.312669 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.312684 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.312721 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.312735 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.416734 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.416795 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.416813 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.416836 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.416858 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.519879 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.519980 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.520000 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.520030 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.520051 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.561001 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.561433 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 10:19:34.561390354 +0000 UTC m=+126.559661362 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.623671 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.623717 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.623731 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.623750 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.623765 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.663000 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.663145 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.663263 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.663278 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.663309 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.663325 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.663321 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.663391 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:34.663373626 +0000 UTC m=+126.661644594 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.663459 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.663517 4998 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.663627 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.664039 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.664051 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.664819 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-02-27 10:19:34.663955208 +0000 UTC m=+126.662226176 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.664925 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:34.664888818 +0000 UTC m=+126.663159966 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.665035 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:34.665009971 +0000 UTC m=+126.663281179 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.725477 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.725513 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.725523 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.725538 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.725548 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.765169 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.765392 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.765693 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.765771 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.765990 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:18 crc kubenswrapper[4998]: E0227 10:19:18.765966 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.781336 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.794493 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.810088 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.827473 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.827527 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.827544 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.827567 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.827585 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.839497 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.850855 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services 
have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.863873 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.879082 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.893367 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.908478 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.919838 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.930918 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.930959 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.930970 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.930991 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.931006 4998 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:18Z","lastTransitionTime":"2026-02-27T10:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.931052 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.943304 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:18 crc kubenswrapper[4998]: I0227 10:19:18.953900 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.033353 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.033417 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.033436 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.033461 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.033475 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.136417 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.136542 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.136561 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.136588 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.136608 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.241696 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.241745 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.241756 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.241771 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.241782 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.344205 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.344289 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.344300 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.344318 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.344330 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.446966 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.447012 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.447022 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.447039 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.447049 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.549048 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.549606 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.549672 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.549746 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.549805 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.652114 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.652186 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.652199 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.652217 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.652256 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.754176 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.754263 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.754281 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.754303 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.754323 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.856816 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.856875 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.856886 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.856902 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.856913 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.959520 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.959573 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.959587 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.959607 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:19 crc kubenswrapper[4998]: I0227 10:19:19.959621 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:19Z","lastTransitionTime":"2026-02-27T10:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.062953 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.062982 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.062991 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.063004 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.063014 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.167042 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.167115 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.167138 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.167166 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.167190 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.270443 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.270499 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.270511 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.270527 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.270539 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.372546 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.372589 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.372598 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.372612 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.372625 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.474703 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.474767 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.474781 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.474796 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.474808 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.577291 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.577331 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.577341 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.577357 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.577367 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.680359 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.680440 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.680455 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.680475 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.680487 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.764746 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:20 crc kubenswrapper[4998]: E0227 10:19:20.764934 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.765244 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.765258 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:19:20 crc kubenswrapper[4998]: E0227 10:19:20.765367 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:19:20 crc kubenswrapper[4998]: E0227 10:19:20.765516 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:19:20 crc kubenswrapper[4998]: E0227 10:19:20.770927 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 10:19:20 crc kubenswrapper[4998]: 	container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash
Feb 27 10:19:20 crc kubenswrapper[4998]: 	set -uo pipefail
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
Feb 27 10:19:20 crc kubenswrapper[4998]: 	trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
Feb 27 10:19:20 crc kubenswrapper[4998]: 	OPENSHIFT_MARKER="openshift-generated-node-resolver"
Feb 27 10:19:20 crc kubenswrapper[4998]: 	HOSTS_FILE="/etc/hosts"
Feb 27 10:19:20 crc kubenswrapper[4998]: 	TEMP_FILE="/etc/hosts.tmp"
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
Feb 27 10:19:20 crc kubenswrapper[4998]: 	IFS=', ' read -r -a services <<< "${SERVICES}"
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
Feb 27 10:19:20 crc kubenswrapper[4998]: 	# Make a temporary file with the old hosts file's attributes.
Feb 27 10:19:20 crc kubenswrapper[4998]: 	if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  echo "Failed to preserve hosts file. Exiting."
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  exit 1
Feb 27 10:19:20 crc kubenswrapper[4998]: 	fi
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
Feb 27 10:19:20 crc kubenswrapper[4998]: 	while true; do
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  declare -A svc_ips
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  for svc in "${services[@]}"; do
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    # Fetch service IP from cluster dns if present. We make several tries
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    # support UDP loadbalancers and require reaching DNS through TCP.
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Feb 27 10:19:20 crc kubenswrapper[4998]: 	          'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Feb 27 10:19:20 crc kubenswrapper[4998]: 	          'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Feb 27 10:19:20 crc kubenswrapper[4998]: 	          'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    for i in ${!cmds[*]}
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    do
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      ips=($(eval "${cmds[i]}"))
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Feb 27 10:19:20 crc kubenswrapper[4998]: 	        svc_ips["${svc}"]="${ips[@]}"
Feb 27 10:19:20 crc kubenswrapper[4998]: 	        break
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      fi
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    done
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  done
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  # Update /etc/hosts only if we get valid service IPs
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  # We will not update /etc/hosts when there is coredns service outage or api unavailability
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  # Stale entries could exist in /etc/hosts if the service is deleted
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  if [[ -n "${svc_ips[*]-}" ]]; then
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    # Build a new hosts file from /etc/hosts with our custom entries filtered out
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      # Only continue rebuilding the hosts entries if its original content is preserved
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      sleep 60 & wait
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      continue
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    fi
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    # Append resolver entries for services
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    rc=0
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    for svc in "${!svc_ips[@]}"; do
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      for ip in ${svc_ips[${svc}]}; do
Feb 27 10:19:20 crc kubenswrapper[4998]: 	        echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      done
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    done
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    if [[ $rc -ne 0 ]]; then
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      sleep 60 & wait
Feb 27 10:19:20 crc kubenswrapper[4998]: 	      continue
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    fi
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    # Replace /etc/hosts with our modified version if needed
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Feb 27 10:19:20 crc kubenswrapper[4998]: 	    # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  fi
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  sleep 60 & wait
Feb 27 10:19:20 crc kubenswrapper[4998]: 	  unset svc_ips
Feb 27 10:19:20 crc kubenswrapper[4998]: 	done
Feb 27 10:19:20 crc kubenswrapper[4998]: 	
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jctzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-qcfqc_openshift-dns(9652967a-d4bf-4304-bd25-4fed87e89b10): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 27 10:19:20 crc kubenswrapper[4998]: > logger="UnhandledError"
Feb 27 10:19:20 crc kubenswrapper[4998]: E0227 10:19:20.772168 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-qcfqc" 
podUID="9652967a-d4bf-4304-bd25-4fed87e89b10"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.783106 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.783440 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.783609 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.783743 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.783866 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.887250 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.887563 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.887667 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.887794 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.887885 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.990430 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.990494 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.990517 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.990548 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:20 crc kubenswrapper[4998]: I0227 10:19:20.990572 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:20Z","lastTransitionTime":"2026-02-27T10:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.092870 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.093135 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.093254 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.093345 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.093426 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:21Z","lastTransitionTime":"2026-02-27T10:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.195559 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.195595 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.195604 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.195618 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.195627 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:21Z","lastTransitionTime":"2026-02-27T10:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.298495 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.298564 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.298580 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.298600 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.298615 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:21Z","lastTransitionTime":"2026-02-27T10:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.400905 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.400956 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.400967 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.400984 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.400997 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:21Z","lastTransitionTime":"2026-02-27T10:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.503621 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.503670 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.503682 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.503694 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.503702 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:21Z","lastTransitionTime":"2026-02-27T10:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.607212 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.607320 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.607339 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.607364 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.607381 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:21Z","lastTransitionTime":"2026-02-27T10:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.723929 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.724007 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.724026 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.724463 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.724524 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:21Z","lastTransitionTime":"2026-02-27T10:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.764551 4998 scope.go:117] "RemoveContainer" containerID="9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.827784 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.827808 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.827815 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.827828 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.827837 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:21Z","lastTransitionTime":"2026-02-27T10:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.929966 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.930001 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.930010 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.930022 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:21 crc kubenswrapper[4998]: I0227 10:19:21.930032 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:21Z","lastTransitionTime":"2026-02-27T10:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.029403 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b"]
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.030102 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b"
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.032305 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.032340 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.032351 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.032374 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.032386 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.032687 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.033431 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.043749 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.058670 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.065957 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.075843 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.083933 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.090487 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.096634 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.101753 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68afe6cb-a559-4162-a25f-a22003feeca4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.101803 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mqxn\" (UniqueName: \"kubernetes.io/projected/68afe6cb-a559-4162-a25f-a22003feeca4-kube-api-access-6mqxn\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.102017 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/68afe6cb-a559-4162-a25f-a22003feeca4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.102081 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68afe6cb-a559-4162-a25f-a22003feeca4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.107652 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.117500 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.127849 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.134196 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.134246 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.134260 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.134275 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.134283 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.135853 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.144220 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.153944 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.160975 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.202783 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mqxn\" (UniqueName: 
\"kubernetes.io/projected/68afe6cb-a559-4162-a25f-a22003feeca4-kube-api-access-6mqxn\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.202863 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68afe6cb-a559-4162-a25f-a22003feeca4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.202888 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68afe6cb-a559-4162-a25f-a22003feeca4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.202935 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68afe6cb-a559-4162-a25f-a22003feeca4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.203712 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68afe6cb-a559-4162-a25f-a22003feeca4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc 
kubenswrapper[4998]: I0227 10:19:22.203883 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68afe6cb-a559-4162-a25f-a22003feeca4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.207654 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68afe6cb-a559-4162-a25f-a22003feeca4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.210396 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.211799 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b"} Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.212714 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.229350 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.233297 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mqxn\" (UniqueName: \"kubernetes.io/projected/68afe6cb-a559-4162-a25f-a22003feeca4-kube-api-access-6mqxn\") pod \"ovnkube-control-plane-749d76644c-g7c4b\" (UID: \"68afe6cb-a559-4162-a25f-a22003feeca4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.236443 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.236486 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.236502 4998 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.236525 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.236543 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.244036 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.255116 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.272902 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.281202 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.287613 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.300214 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.308937 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.319914 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.329974 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.339153 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.339191 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.339202 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.339218 4998 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.339252 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.340307 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.348357 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.348382 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.359678 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: W0227 10:19:22.365397 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68afe6cb_a559_4162_a25f_a22003feeca4.slice/crio-49320821371e2483528540a39207ba004ae2551f11a0104f888086114333ef77 WatchSource:0}: Error finding container 49320821371e2483528540a39207ba004ae2551f11a0104f888086114333ef77: Status 404 returned error can't find the container with id 49320821371e2483528540a39207ba004ae2551f11a0104f888086114333ef77 Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.367543 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:22 crc kubenswrapper[4998]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 27 10:19:22 crc kubenswrapper[4998]: set -euo pipefail Feb 27 10:19:22 crc kubenswrapper[4998]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 27 10:19:22 crc kubenswrapper[4998]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 27 10:19:22 crc kubenswrapper[4998]: # As the secret mount is optional we must wait for the files to be present. Feb 27 10:19:22 crc kubenswrapper[4998]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Feb 27 10:19:22 crc kubenswrapper[4998]: TS=$(date +%s) Feb 27 10:19:22 crc kubenswrapper[4998]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 27 10:19:22 crc kubenswrapper[4998]: HAS_LOGGED_INFO=0 Feb 27 10:19:22 crc kubenswrapper[4998]: Feb 27 10:19:22 crc kubenswrapper[4998]: log_missing_certs(){ Feb 27 10:19:22 crc kubenswrapper[4998]: CUR_TS=$(date +%s) Feb 27 10:19:22 crc kubenswrapper[4998]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 27 10:19:22 crc kubenswrapper[4998]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 27 10:19:22 crc kubenswrapper[4998]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 27 10:19:22 crc kubenswrapper[4998]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 27 10:19:22 crc kubenswrapper[4998]: HAS_LOGGED_INFO=1 Feb 27 10:19:22 crc kubenswrapper[4998]: fi Feb 27 10:19:22 crc kubenswrapper[4998]: } Feb 27 10:19:22 crc kubenswrapper[4998]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 27 10:19:22 crc kubenswrapper[4998]: log_missing_certs Feb 27 10:19:22 crc kubenswrapper[4998]: sleep 5 Feb 27 10:19:22 crc kubenswrapper[4998]: done Feb 27 10:19:22 crc kubenswrapper[4998]: Feb 27 10:19:22 crc kubenswrapper[4998]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 27 10:19:22 crc kubenswrapper[4998]: exec /usr/bin/kube-rbac-proxy \ Feb 27 10:19:22 crc kubenswrapper[4998]: --logtostderr \ Feb 27 10:19:22 crc kubenswrapper[4998]: --secure-listen-address=:9108 \ Feb 27 10:19:22 crc kubenswrapper[4998]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 27 10:19:22 crc kubenswrapper[4998]: --upstream=http://127.0.0.1:29108/ \ Feb 27 10:19:22 crc kubenswrapper[4998]: --tls-private-key-file=${TLS_PK} \ Feb 27 10:19:22 crc kubenswrapper[4998]: --tls-cert-file=${TLS_CERT} Feb 27 10:19:22 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mqxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-g7c4b_openshift-ovn-kubernetes(68afe6cb-a559-4162-a25f-a22003feeca4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:22 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.370040 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:22 crc kubenswrapper[4998]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:19:22 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Feb 27 10:19:22 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:22 crc kubenswrapper[4998]: source "/env/_master" Feb 27 10:19:22 crc kubenswrapper[4998]: set +o allexport Feb 27 10:19:22 crc kubenswrapper[4998]: fi Feb 27 10:19:22 crc kubenswrapper[4998]: Feb 27 10:19:22 crc kubenswrapper[4998]: ovn_v4_join_subnet_opt= Feb 27 10:19:22 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Feb 27 10:19:22 crc kubenswrapper[4998]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 27 
10:19:22 crc kubenswrapper[4998]: fi Feb 27 10:19:22 crc kubenswrapper[4998]: ovn_v6_join_subnet_opt= Feb 27 10:19:22 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Feb 27 10:19:22 crc kubenswrapper[4998]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 27 10:19:22 crc kubenswrapper[4998]: fi Feb 27 10:19:22 crc kubenswrapper[4998]: Feb 27 10:19:22 crc kubenswrapper[4998]: ovn_v4_transit_switch_subnet_opt= Feb 27 10:19:22 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Feb 27 10:19:22 crc kubenswrapper[4998]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 27 10:19:22 crc kubenswrapper[4998]: fi Feb 27 10:19:22 crc kubenswrapper[4998]: ovn_v6_transit_switch_subnet_opt= Feb 27 10:19:22 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Feb 27 10:19:22 crc kubenswrapper[4998]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 27 10:19:22 crc kubenswrapper[4998]: fi Feb 27 10:19:22 crc kubenswrapper[4998]: Feb 27 10:19:22 crc kubenswrapper[4998]: dns_name_resolver_enabled_flag= Feb 27 10:19:22 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Feb 27 10:19:22 crc kubenswrapper[4998]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 27 10:19:22 crc kubenswrapper[4998]: fi Feb 27 10:19:22 crc kubenswrapper[4998]: Feb 27 10:19:22 crc kubenswrapper[4998]: persistent_ips_enabled_flag= Feb 27 10:19:22 crc kubenswrapper[4998]: if [[ "true" == "true" ]]; then Feb 27 10:19:22 crc kubenswrapper[4998]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 27 10:19:22 crc kubenswrapper[4998]: fi Feb 27 10:19:22 crc kubenswrapper[4998]: Feb 27 10:19:22 crc kubenswrapper[4998]: # This is needed so that converting clusters from GA to TP Feb 27 10:19:22 crc kubenswrapper[4998]: # will rollout control plane pods as well Feb 27 10:19:22 crc kubenswrapper[4998]: network_segmentation_enabled_flag= Feb 27 10:19:22 crc kubenswrapper[4998]: multi_network_enabled_flag= Feb 27 10:19:22 crc 
kubenswrapper[4998]: if [[ "true" == "true" ]]; then Feb 27 10:19:22 crc kubenswrapper[4998]: multi_network_enabled_flag="--enable-multi-network" Feb 27 10:19:22 crc kubenswrapper[4998]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 27 10:19:22 crc kubenswrapper[4998]: fi Feb 27 10:19:22 crc kubenswrapper[4998]: Feb 27 10:19:22 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 27 10:19:22 crc kubenswrapper[4998]: exec /usr/bin/ovnkube \ Feb 27 10:19:22 crc kubenswrapper[4998]: --enable-interconnect \ Feb 27 10:19:22 crc kubenswrapper[4998]: --init-cluster-manager "${K8S_NODE}" \ Feb 27 10:19:22 crc kubenswrapper[4998]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 27 10:19:22 crc kubenswrapper[4998]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 27 10:19:22 crc kubenswrapper[4998]: --metrics-bind-address "127.0.0.1:29108" \ Feb 27 10:19:22 crc kubenswrapper[4998]: --metrics-enable-pprof \ Feb 27 10:19:22 crc kubenswrapper[4998]: --metrics-enable-config-duration \ Feb 27 10:19:22 crc kubenswrapper[4998]: ${ovn_v4_join_subnet_opt} \ Feb 27 10:19:22 crc kubenswrapper[4998]: ${ovn_v6_join_subnet_opt} \ Feb 27 10:19:22 crc kubenswrapper[4998]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 27 10:19:22 crc kubenswrapper[4998]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 27 10:19:22 crc kubenswrapper[4998]: ${dns_name_resolver_enabled_flag} \ Feb 27 10:19:22 crc kubenswrapper[4998]: ${persistent_ips_enabled_flag} \ Feb 27 10:19:22 crc kubenswrapper[4998]: ${multi_network_enabled_flag} \ Feb 27 10:19:22 crc kubenswrapper[4998]: ${network_segmentation_enabled_flag} Feb 27 10:19:22 crc kubenswrapper[4998]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mqxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-g7c4b_openshift-ovn-kubernetes(68afe6cb-a559-4162-a25f-a22003feeca4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:22 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.370272 4998 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.371821 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct 
envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" podUID="68afe6cb-a559-4162-a25f-a22003feeca4" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.441368 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.441420 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.441431 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.441449 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.441464 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.545053 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.545122 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.545134 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.545155 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.545172 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.648850 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.648896 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.648907 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.648921 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.648931 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.733747 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-86xkz"] Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.734452 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.734574 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.746005 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.750903 4998 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.750956 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.750969 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.750989 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.751004 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.758491 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.763960 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.764152 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.764463 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.764471 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.764869 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.765076 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.766393 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:22 crc kubenswrapper[4998]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 27 10:19:22 crc kubenswrapper[4998]: apiVersion: v1 Feb 27 10:19:22 crc kubenswrapper[4998]: clusters: Feb 27 10:19:22 crc kubenswrapper[4998]: - cluster: Feb 27 10:19:22 crc kubenswrapper[4998]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 27 10:19:22 crc kubenswrapper[4998]: server: https://api-int.crc.testing:6443 Feb 27 10:19:22 crc kubenswrapper[4998]: name: default-cluster Feb 27 10:19:22 crc kubenswrapper[4998]: contexts: Feb 27 10:19:22 crc kubenswrapper[4998]: - context: Feb 27 10:19:22 crc kubenswrapper[4998]: 
cluster: default-cluster Feb 27 10:19:22 crc kubenswrapper[4998]: namespace: default Feb 27 10:19:22 crc kubenswrapper[4998]: user: default-auth Feb 27 10:19:22 crc kubenswrapper[4998]: name: default-context Feb 27 10:19:22 crc kubenswrapper[4998]: current-context: default-context Feb 27 10:19:22 crc kubenswrapper[4998]: kind: Config Feb 27 10:19:22 crc kubenswrapper[4998]: preferences: {} Feb 27 10:19:22 crc kubenswrapper[4998]: users: Feb 27 10:19:22 crc kubenswrapper[4998]: - name: default-auth Feb 27 10:19:22 crc kubenswrapper[4998]: user: Feb 27 10:19:22 crc kubenswrapper[4998]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:19:22 crc kubenswrapper[4998]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:19:22 crc kubenswrapper[4998]: EOF Feb 27 10:19:22 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xxgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:22 crc kubenswrapper[4998]: > logger="UnhandledError" 
Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.766644 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l6t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.767463 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.769189 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.769207 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l6t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.770652 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.779858 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.790627 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.801289 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.809815 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.809922 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqwdh\" (UniqueName: \"kubernetes.io/projected/40178d6d-6068-4937-b7d5-883538892cc5-kube-api-access-lqwdh\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:22 crc 
kubenswrapper[4998]: I0227 10:19:22.810654 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.818520 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.833216 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.843690 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services 
have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.854020 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.854060 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.854071 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.854090 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.854127 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.855725 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.865920 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.873276 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.879820 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.891937 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.910761 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.910835 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqwdh\" (UniqueName: 
\"kubernetes.io/projected/40178d6d-6068-4937-b7d5-883538892cc5-kube-api-access-lqwdh\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.910982 4998 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:22 crc kubenswrapper[4998]: E0227 10:19:22.911089 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs podName:40178d6d-6068-4937-b7d5-883538892cc5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:23.411063782 +0000 UTC m=+115.409334840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs") pod "network-metrics-daemon-86xkz" (UID: "40178d6d-6068-4937-b7d5-883538892cc5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.928686 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqwdh\" (UniqueName: \"kubernetes.io/projected/40178d6d-6068-4937-b7d5-883538892cc5-kube-api-access-lqwdh\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.956795 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.956867 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.956892 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.956923 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:22 crc kubenswrapper[4998]: I0227 10:19:22.956946 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:22Z","lastTransitionTime":"2026-02-27T10:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.060091 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.060131 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.060141 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.060154 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.060162 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.163331 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.163385 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.163400 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.163418 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.163431 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.214986 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" event={"ID":"68afe6cb-a559-4162-a25f-a22003feeca4","Type":"ContainerStarted","Data":"49320821371e2483528540a39207ba004ae2551f11a0104f888086114333ef77"} Feb 27 10:19:23 crc kubenswrapper[4998]: E0227 10:19:23.216399 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:23 crc kubenswrapper[4998]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 27 10:19:23 crc kubenswrapper[4998]: set -euo pipefail Feb 27 10:19:23 crc kubenswrapper[4998]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 27 10:19:23 crc kubenswrapper[4998]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 27 10:19:23 crc kubenswrapper[4998]: # As the secret mount is optional we must wait for the files to be present. Feb 27 10:19:23 crc kubenswrapper[4998]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 27 10:19:23 crc kubenswrapper[4998]: TS=$(date +%s) Feb 27 10:19:23 crc kubenswrapper[4998]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 27 10:19:23 crc kubenswrapper[4998]: HAS_LOGGED_INFO=0 Feb 27 10:19:23 crc kubenswrapper[4998]: Feb 27 10:19:23 crc kubenswrapper[4998]: log_missing_certs(){ Feb 27 10:19:23 crc kubenswrapper[4998]: CUR_TS=$(date +%s) Feb 27 10:19:23 crc kubenswrapper[4998]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 27 10:19:23 crc kubenswrapper[4998]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 27 10:19:23 crc kubenswrapper[4998]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 27 10:19:23 crc kubenswrapper[4998]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Feb 27 10:19:23 crc kubenswrapper[4998]: HAS_LOGGED_INFO=1 Feb 27 10:19:23 crc kubenswrapper[4998]: fi Feb 27 10:19:23 crc kubenswrapper[4998]: } Feb 27 10:19:23 crc kubenswrapper[4998]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 27 10:19:23 crc kubenswrapper[4998]: log_missing_certs Feb 27 10:19:23 crc kubenswrapper[4998]: sleep 5 Feb 27 10:19:23 crc kubenswrapper[4998]: done Feb 27 10:19:23 crc kubenswrapper[4998]: Feb 27 10:19:23 crc kubenswrapper[4998]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 27 10:19:23 crc kubenswrapper[4998]: exec /usr/bin/kube-rbac-proxy \ Feb 27 10:19:23 crc kubenswrapper[4998]: --logtostderr \ Feb 27 10:19:23 crc kubenswrapper[4998]: --secure-listen-address=:9108 \ Feb 27 10:19:23 crc kubenswrapper[4998]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 27 10:19:23 crc kubenswrapper[4998]: --upstream=http://127.0.0.1:29108/ \ Feb 27 10:19:23 crc kubenswrapper[4998]: --tls-private-key-file=${TLS_PK} \ Feb 27 10:19:23 crc kubenswrapper[4998]: --tls-cert-file=${TLS_CERT} Feb 27 10:19:23 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mqxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-g7c4b_openshift-ovn-kubernetes(68afe6cb-a559-4162-a25f-a22003feeca4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:23 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:23 crc kubenswrapper[4998]: E0227 10:19:23.219289 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:23 crc kubenswrapper[4998]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:19:23 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Feb 27 10:19:23 crc kubenswrapper[4998]: set -o allexport Feb 27 10:19:23 crc kubenswrapper[4998]: source "/env/_master" Feb 27 10:19:23 crc kubenswrapper[4998]: set +o allexport Feb 27 10:19:23 crc kubenswrapper[4998]: fi Feb 27 10:19:23 crc kubenswrapper[4998]: Feb 27 10:19:23 crc kubenswrapper[4998]: ovn_v4_join_subnet_opt= Feb 27 10:19:23 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Feb 27 10:19:23 crc kubenswrapper[4998]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 27 
10:19:23 crc kubenswrapper[4998]: fi Feb 27 10:19:23 crc kubenswrapper[4998]: ovn_v6_join_subnet_opt= Feb 27 10:19:23 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Feb 27 10:19:23 crc kubenswrapper[4998]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 27 10:19:23 crc kubenswrapper[4998]: fi Feb 27 10:19:23 crc kubenswrapper[4998]: Feb 27 10:19:23 crc kubenswrapper[4998]: ovn_v4_transit_switch_subnet_opt= Feb 27 10:19:23 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Feb 27 10:19:23 crc kubenswrapper[4998]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 27 10:19:23 crc kubenswrapper[4998]: fi Feb 27 10:19:23 crc kubenswrapper[4998]: ovn_v6_transit_switch_subnet_opt= Feb 27 10:19:23 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Feb 27 10:19:23 crc kubenswrapper[4998]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 27 10:19:23 crc kubenswrapper[4998]: fi Feb 27 10:19:23 crc kubenswrapper[4998]: Feb 27 10:19:23 crc kubenswrapper[4998]: dns_name_resolver_enabled_flag= Feb 27 10:19:23 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Feb 27 10:19:23 crc kubenswrapper[4998]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 27 10:19:23 crc kubenswrapper[4998]: fi Feb 27 10:19:23 crc kubenswrapper[4998]: Feb 27 10:19:23 crc kubenswrapper[4998]: persistent_ips_enabled_flag= Feb 27 10:19:23 crc kubenswrapper[4998]: if [[ "true" == "true" ]]; then Feb 27 10:19:23 crc kubenswrapper[4998]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 27 10:19:23 crc kubenswrapper[4998]: fi Feb 27 10:19:23 crc kubenswrapper[4998]: Feb 27 10:19:23 crc kubenswrapper[4998]: # This is needed so that converting clusters from GA to TP Feb 27 10:19:23 crc kubenswrapper[4998]: # will rollout control plane pods as well Feb 27 10:19:23 crc kubenswrapper[4998]: network_segmentation_enabled_flag= Feb 27 10:19:23 crc kubenswrapper[4998]: multi_network_enabled_flag= Feb 27 10:19:23 crc 
kubenswrapper[4998]: if [[ "true" == "true" ]]; then Feb 27 10:19:23 crc kubenswrapper[4998]: multi_network_enabled_flag="--enable-multi-network" Feb 27 10:19:23 crc kubenswrapper[4998]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 27 10:19:23 crc kubenswrapper[4998]: fi Feb 27 10:19:23 crc kubenswrapper[4998]: Feb 27 10:19:23 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 27 10:19:23 crc kubenswrapper[4998]: exec /usr/bin/ovnkube \ Feb 27 10:19:23 crc kubenswrapper[4998]: --enable-interconnect \ Feb 27 10:19:23 crc kubenswrapper[4998]: --init-cluster-manager "${K8S_NODE}" \ Feb 27 10:19:23 crc kubenswrapper[4998]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 27 10:19:23 crc kubenswrapper[4998]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 27 10:19:23 crc kubenswrapper[4998]: --metrics-bind-address "127.0.0.1:29108" \ Feb 27 10:19:23 crc kubenswrapper[4998]: --metrics-enable-pprof \ Feb 27 10:19:23 crc kubenswrapper[4998]: --metrics-enable-config-duration \ Feb 27 10:19:23 crc kubenswrapper[4998]: ${ovn_v4_join_subnet_opt} \ Feb 27 10:19:23 crc kubenswrapper[4998]: ${ovn_v6_join_subnet_opt} \ Feb 27 10:19:23 crc kubenswrapper[4998]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 27 10:19:23 crc kubenswrapper[4998]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 27 10:19:23 crc kubenswrapper[4998]: ${dns_name_resolver_enabled_flag} \ Feb 27 10:19:23 crc kubenswrapper[4998]: ${persistent_ips_enabled_flag} \ Feb 27 10:19:23 crc kubenswrapper[4998]: ${multi_network_enabled_flag} \ Feb 27 10:19:23 crc kubenswrapper[4998]: ${network_segmentation_enabled_flag} Feb 27 10:19:23 crc kubenswrapper[4998]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mqxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-g7c4b_openshift-ovn-kubernetes(68afe6cb-a559-4162-a25f-a22003feeca4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:23 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:23 crc kubenswrapper[4998]: E0227 10:19:23.220573 4998 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" podUID="68afe6cb-a559-4162-a25f-a22003feeca4" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.228304 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.238462 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.248635 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.263756 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.265304 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.265344 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.265354 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.265368 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.265376 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.272176 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.279264 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.289812 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.297850 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.308819 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kub
e-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.316318 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.323936 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.332185 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.339098 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.347741 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.356126 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.367848 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.367882 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.367892 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.367907 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.367917 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.416377 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:23 crc kubenswrapper[4998]: E0227 10:19:23.416534 4998 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:23 crc kubenswrapper[4998]: E0227 10:19:23.416617 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs podName:40178d6d-6068-4937-b7d5-883538892cc5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:24.416598864 +0000 UTC m=+116.414869832 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs") pod "network-metrics-daemon-86xkz" (UID: "40178d6d-6068-4937-b7d5-883538892cc5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.470397 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.470460 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.470473 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.470492 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.470505 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.572871 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.572926 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.572935 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.572957 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.572971 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.675052 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.675106 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.675118 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.675134 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.675144 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.777918 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.777967 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.778010 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.778025 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.778062 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.880366 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.880402 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.880412 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.880428 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.880439 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.983836 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.983872 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.983880 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.983893 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:23 crc kubenswrapper[4998]: I0227 10:19:23.983902 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:23Z","lastTransitionTime":"2026-02-27T10:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.086163 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.086194 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.086201 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.086213 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.086242 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:24Z","lastTransitionTime":"2026-02-27T10:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.188689 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.188731 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.188740 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.188753 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.188762 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:24Z","lastTransitionTime":"2026-02-27T10:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.290958 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.291201 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.291335 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.291399 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.291454 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:24Z","lastTransitionTime":"2026-02-27T10:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.394328 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.394358 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.394368 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.394380 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.394388 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:24Z","lastTransitionTime":"2026-02-27T10:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.425128 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.425293 4998 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.425404 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs podName:40178d6d-6068-4937-b7d5-883538892cc5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:26.425360549 +0000 UTC m=+118.423631517 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs") pod "network-metrics-daemon-86xkz" (UID: "40178d6d-6068-4937-b7d5-883538892cc5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.497000 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.497058 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.497076 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.497100 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.497117 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:24Z","lastTransitionTime":"2026-02-27T10:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.600454 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.600532 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.600554 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.600583 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.600609 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:24Z","lastTransitionTime":"2026-02-27T10:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.702812 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.702872 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.702889 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.702909 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.702926 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:24Z","lastTransitionTime":"2026-02-27T10:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.764053 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.764064 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.764149 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.764442 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.764611 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.764690 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.764812 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.765196 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.766954 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:19:24 crc kubenswrapper[4998]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 27 10:19:24 crc kubenswrapper[4998]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 27 10:19:24 crc kubenswrapper[4998]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s287,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-46lvx_openshift-multus(a046a5ca-7081-4920-98af-1027a5bc29d0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:19:24 crc kubenswrapper[4998]: > logger="UnhandledError" Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.767088 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnkjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-l9x2p_openshift-multus(e55e9768-52ee-4fcf-a279-1b55e6d6c6fd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.768104 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-46lvx" podUID="a046a5ca-7081-4920-98af-1027a5bc29d0" Feb 27 10:19:24 crc kubenswrapper[4998]: E0227 10:19:24.768159 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" podUID="e55e9768-52ee-4fcf-a279-1b55e6d6c6fd" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.806790 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.806855 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.806871 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.806893 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.806913 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:24Z","lastTransitionTime":"2026-02-27T10:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.909719 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.909764 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.909779 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.909798 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:24 crc kubenswrapper[4998]: I0227 10:19:24.909814 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:24Z","lastTransitionTime":"2026-02-27T10:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.012363 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.012395 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.012403 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.012417 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.012433 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.115945 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.116030 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.116047 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.116063 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.116074 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.218220 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.218277 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.218290 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.218306 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.218316 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.290044 4998 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.320667 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.320703 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.320712 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.320726 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.320736 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.422695 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.422747 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.422764 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.422780 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.422791 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.525521 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.525587 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.525595 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.525609 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.525618 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.627737 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.627780 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.627791 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.627806 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.627819 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.730599 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.730662 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.730680 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.730706 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.730723 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.837355 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.837419 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.837436 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.837462 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.837480 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.940934 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.940981 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.940999 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.941021 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:25 crc kubenswrapper[4998]: I0227 10:19:25.941038 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:25Z","lastTransitionTime":"2026-02-27T10:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.043636 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.043670 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.043679 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.043713 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.043725 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.145900 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.145936 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.145945 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.145957 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.145986 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.249003 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.249096 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.249160 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.249198 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.249220 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.351310 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.351373 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.351389 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.351408 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.351423 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.445594 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.445732 4998 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.445790 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs podName:40178d6d-6068-4937-b7d5-883538892cc5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:30.445771989 +0000 UTC m=+122.444042957 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs") pod "network-metrics-daemon-86xkz" (UID: "40178d6d-6068-4937-b7d5-883538892cc5") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.453865 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.453919 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.453935 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.453956 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.453974 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.557518 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.557572 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.557584 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.557606 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.557618 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.660631 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.661033 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.661047 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.661121 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.661135 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.763804 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.763833 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.763870 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.763911 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.763920 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.763987 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.764000 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.764008 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.764043 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.764059 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.764043 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.764071 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.764080 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.867195 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.867262 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.867274 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.867291 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.867303 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.881775 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.881853 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.881865 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.881882 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.881894 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.895535 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.900099 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.900144 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.900156 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.900176 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.900188 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.909800 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.913435 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.913473 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.913481 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.913494 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.913502 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.940960 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.945063 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.945095 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.945104 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.945140 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.945151 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.958833 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.966770 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.966833 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.966846 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.966862 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.966874 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.980737 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:26 crc kubenswrapper[4998]: E0227 10:19:26.980869 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.982257 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.982306 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.982318 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.982335 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:26 crc kubenswrapper[4998]: I0227 10:19:26.982346 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:26Z","lastTransitionTime":"2026-02-27T10:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.085339 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.085388 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.085400 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.085417 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.085428 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:27Z","lastTransitionTime":"2026-02-27T10:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.188692 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.188751 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.188767 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.188790 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.188821 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:27Z","lastTransitionTime":"2026-02-27T10:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.291722 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.291792 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.291814 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.291846 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.291867 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:27Z","lastTransitionTime":"2026-02-27T10:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.394429 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.394502 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.394537 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.394567 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.394588 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:27Z","lastTransitionTime":"2026-02-27T10:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.497632 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.497729 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.497748 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.497768 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.497784 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:27Z","lastTransitionTime":"2026-02-27T10:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.600971 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.601023 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.601034 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.601054 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.601067 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:27Z","lastTransitionTime":"2026-02-27T10:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.704098 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.704138 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.704146 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.704160 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.704171 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:27Z","lastTransitionTime":"2026-02-27T10:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.810563 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.810630 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.810651 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.810680 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.810702 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:27Z","lastTransitionTime":"2026-02-27T10:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.912925 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.912964 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.912974 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.912995 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:27 crc kubenswrapper[4998]: I0227 10:19:27.913013 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:27Z","lastTransitionTime":"2026-02-27T10:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.015477 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.015536 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.015547 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.015566 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.015579 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:28Z","lastTransitionTime":"2026-02-27T10:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.118778 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.118864 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.118884 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.118915 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.118938 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:28Z","lastTransitionTime":"2026-02-27T10:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.221858 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.221920 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.221936 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.221960 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.221980 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:28Z","lastTransitionTime":"2026-02-27T10:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.231335 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.231470 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.234544 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.249864 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.261491 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.275043 4998 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.287439 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.300409 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.313100 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.324982 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.325029 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.325038 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.325054 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.325068 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:28Z","lastTransitionTime":"2026-02-27T10:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.325197 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.334450 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.345505 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.354614 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.362432 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.388726 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.397165 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.410015 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.424730 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.427794 4998 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.427836 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.427848 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.427867 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.427880 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:28Z","lastTransitionTime":"2026-02-27T10:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.446004 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.456880 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.470958 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.485829 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.499527 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.509522 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.524982 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.530199 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.530255 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.530265 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:28 crc 
kubenswrapper[4998]: I0227 10:19:28.530279 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.530291 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:28Z","lastTransitionTime":"2026-02-27T10:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.538964 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.553973 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.564838 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc 
kubenswrapper[4998]: I0227 10:19:28.577197 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.591652 4998 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.604812 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.615521 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.626864 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.632533 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.632588 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.632598 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.632611 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.632620 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:28Z","lastTransitionTime":"2026-02-27T10:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:28 crc kubenswrapper[4998]: E0227 10:19:28.733167 4998 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.764827 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.764828 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.764841 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.764967 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:28 crc kubenswrapper[4998]: E0227 10:19:28.765133 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:28 crc kubenswrapper[4998]: E0227 10:19:28.765257 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:28 crc kubenswrapper[4998]: E0227 10:19:28.765365 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:28 crc kubenswrapper[4998]: E0227 10:19:28.765451 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.793415 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.808402 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc 
kubenswrapper[4998]: I0227 10:19:28.822731 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.839213 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.849273 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: E0227 10:19:28.858904 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.864114 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.882182 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.898580 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.917709 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.931915 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.954450 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.965574 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.978825 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:28 crc kubenswrapper[4998]: I0227 10:19:28.997038 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.006668 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.237441 4998 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jl2nx" event={"ID":"c459deb8-e1ea-43de-a1b0-1b463eee4bdc","Type":"ContainerStarted","Data":"a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41"} Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.247467 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.259718 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.271177 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.282275 4998 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.293133 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc 
kubenswrapper[4998]: I0227 10:19:29.302128 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.314252 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.324013 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.338688 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.348673 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.362034 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.374075 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.384113 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.403540 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:29 crc kubenswrapper[4998]: I0227 10:19:29.414506 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:30 crc kubenswrapper[4998]: I0227 10:19:30.486769 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:30 crc kubenswrapper[4998]: E0227 10:19:30.487005 4998 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:30 crc kubenswrapper[4998]: E0227 10:19:30.487103 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs podName:40178d6d-6068-4937-b7d5-883538892cc5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:38.487080641 +0000 UTC m=+130.485351629 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs") pod "network-metrics-daemon-86xkz" (UID: "40178d6d-6068-4937-b7d5-883538892cc5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:30 crc kubenswrapper[4998]: I0227 10:19:30.764597 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:30 crc kubenswrapper[4998]: I0227 10:19:30.764751 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:30 crc kubenswrapper[4998]: E0227 10:19:30.764879 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:30 crc kubenswrapper[4998]: I0227 10:19:30.764911 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:30 crc kubenswrapper[4998]: I0227 10:19:30.764907 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:30 crc kubenswrapper[4998]: E0227 10:19:30.764991 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:30 crc kubenswrapper[4998]: E0227 10:19:30.765094 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:30 crc kubenswrapper[4998]: E0227 10:19:30.765352 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:30 crc kubenswrapper[4998]: I0227 10:19:30.773970 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.243745 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3"} Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.262474 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.280344 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.292065 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.310607 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.322402 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.333028 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.346874 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.357924 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.369215 4998 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.381809 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.392883 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.404787 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.422864 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.438764 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.452210 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:31 crc kubenswrapper[4998]: I0227 10:19:31.468458 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.241250 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.259445 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 
10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.271515 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.286026 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.298729 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.311044 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.322129 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.332879 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.342604 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.353671 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.364490 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.376497 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.393881 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.407741 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.420831 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.434973 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.445824 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.763870 4998 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.763896 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.763930 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:19:32 crc kubenswrapper[4998]: E0227 10:19:32.764043 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:19:32 crc kubenswrapper[4998]: I0227 10:19:32.764101 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:19:32 crc kubenswrapper[4998]: E0227 10:19:32.764422 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:19:32 crc kubenswrapper[4998]: E0227 10:19:32.764580 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:19:32 crc kubenswrapper[4998]: E0227 10:19:32.764762 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:19:33 crc kubenswrapper[4998]: E0227 10:19:33.860732 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 10:19:34 crc kubenswrapper[4998]: I0227 10:19:34.643221 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.643450 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:20:06.643412268 +0000 UTC m=+158.641683276 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:19:34 crc kubenswrapper[4998]: I0227 10:19:34.744893 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:34 crc kubenswrapper[4998]: I0227 10:19:34.744945 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:19:34 crc kubenswrapper[4998]: I0227 10:19:34.744964 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:34 crc kubenswrapper[4998]: I0227 10:19:34.744995 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745107 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745122 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745132 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745179 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:20:06.745165084 +0000 UTC m=+158.743436052 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745260 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745313 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745334 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745382 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745434 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:20:06.745406049 +0000 UTC m=+158.743677057 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745422 4998 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745651 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:20:06.745593484 +0000 UTC m=+158.743864582 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.745723 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:20:06.745701126 +0000 UTC m=+158.743972354 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 27 10:19:34 crc kubenswrapper[4998]: I0227 10:19:34.764744 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:19:34 crc kubenswrapper[4998]: I0227 10:19:34.765060 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:19:34 crc kubenswrapper[4998]: I0227 10:19:34.765108 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:19:34 crc kubenswrapper[4998]: I0227 10:19:34.765194 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.765519 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.765622 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.765716 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:19:34 crc kubenswrapper[4998]: E0227 10:19:34.765805 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.256532 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" event={"ID":"68afe6cb-a559-4162-a25f-a22003feeca4","Type":"ContainerStarted","Data":"a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51"}
Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.256607 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" event={"ID":"68afe6cb-a559-4162-a25f-a22003feeca4","Type":"ContainerStarted","Data":"1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b"}
Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.258492 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b"
containerID="a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c" exitCode=0 Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.258532 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c"} Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.273626 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.294578 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.307603 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.327723 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.341579 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.355425 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.370898 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.388517 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.404029 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.416346 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.431597 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.453266 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.466902 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.483387 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.497156 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.510290 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.525537 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.551520 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.574665 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.590001 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.604578 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.620057 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.632979 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.647577 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.665156 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.679648 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.699165 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.716915 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.728915 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.748328 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.764419 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:35 crc kubenswrapper[4998]: I0227 10:19:35.777743 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:35Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.262588 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qcfqc" event={"ID":"9652967a-d4bf-4304-bd25-4fed87e89b10","Type":"ContainerStarted","Data":"79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.267707 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.267823 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.267841 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.267859 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.267873 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.267887 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.270404 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.270470 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.273005 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46lvx" event={"ID":"a046a5ca-7081-4920-98af-1027a5bc29d0","Type":"ContainerStarted","Data":"70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4"} Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.283358 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc 
kubenswrapper[4998]: I0227 10:19:36.299946 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc
7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 
10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.313286 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.325989 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.338950 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.351391 4998 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.371683 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.382421 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.394404 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.413713 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.425146 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.440401 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.454285 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.468120 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.480402 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.494190 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.508065 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.520544 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.534353 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.547250 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.560954 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.575411 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.586940 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.599874 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.620174 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.630997 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.644780 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.661338 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.678000 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.699538 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.715605 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.733244 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:36Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.764470 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.764507 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.764512 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:36 crc kubenswrapper[4998]: E0227 10:19:36.764608 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:36 crc kubenswrapper[4998]: I0227 10:19:36.764640 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:36 crc kubenswrapper[4998]: E0227 10:19:36.764883 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:36 crc kubenswrapper[4998]: E0227 10:19:36.764957 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:36 crc kubenswrapper[4998]: E0227 10:19:36.765035 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.121619 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.121840 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.121849 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.121860 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.121868 4998 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:37Z","lastTransitionTime":"2026-02-27T10:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:19:37 crc kubenswrapper[4998]: E0227 10:19:37.134645 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:37Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.137852 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.137883 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.137891 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.137907 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.137916 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:37Z","lastTransitionTime":"2026-02-27T10:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:37 crc kubenswrapper[4998]: E0227 10:19:37.149104 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:37Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.154647 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.154684 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.154692 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.154706 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.154715 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:37Z","lastTransitionTime":"2026-02-27T10:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:37 crc kubenswrapper[4998]: E0227 10:19:37.166735 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:37Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.169993 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.170051 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.170064 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.170080 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.170092 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:37Z","lastTransitionTime":"2026-02-27T10:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:37 crc kubenswrapper[4998]: E0227 10:19:37.184074 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:37Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.186989 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.187026 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.187036 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.187052 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:37 crc kubenswrapper[4998]: I0227 10:19:37.187060 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:37Z","lastTransitionTime":"2026-02-27T10:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:37 crc kubenswrapper[4998]: E0227 10:19:37.199999 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:37Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:37 crc kubenswrapper[4998]: E0227 10:19:37.200145 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.281604 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.581450 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:38 crc kubenswrapper[4998]: E0227 10:19:38.581598 4998 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:38 crc kubenswrapper[4998]: E0227 10:19:38.581653 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs podName:40178d6d-6068-4937-b7d5-883538892cc5 nodeName:}" failed. No retries permitted until 2026-02-27 10:19:54.581638163 +0000 UTC m=+146.579909131 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs") pod "network-metrics-daemon-86xkz" (UID: "40178d6d-6068-4937-b7d5-883538892cc5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.764830 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.764870 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.764900 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.764965 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:38 crc kubenswrapper[4998]: E0227 10:19:38.764967 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:38 crc kubenswrapper[4998]: E0227 10:19:38.765065 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:38 crc kubenswrapper[4998]: E0227 10:19:38.765267 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:38 crc kubenswrapper[4998]: E0227 10:19:38.765339 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.776690 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.792780 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.802399 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.815066 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.828345 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.839241 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.857634 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: E0227 10:19:38.861239 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.871179 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.883028 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.896964 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.908131 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.919572 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.930281 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.940735 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.959475 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:38 crc kubenswrapper[4998]: I0227 10:19:38.979688 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:38Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:39 crc kubenswrapper[4998]: I0227 10:19:39.782204 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.291080 4998 generic.go:334] "Generic (PLEG): container finished" podID="e55e9768-52ee-4fcf-a279-1b55e6d6c6fd" containerID="8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb" exitCode=0 Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.291182 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-l9x2p" event={"ID":"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd","Type":"ContainerDied","Data":"8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb"} Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.332401 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.344354 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.360191 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.374247 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.388924 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.404106 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.413512 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.431493 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.443878 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.459996 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.477216 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.491472 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.504469 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 
10:19:40.520213 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.533770 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.544909 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.559937 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:40Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.764110 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.764170 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.764136 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:40 crc kubenswrapper[4998]: E0227 10:19:40.764295 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:40 crc kubenswrapper[4998]: E0227 10:19:40.764379 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:40 crc kubenswrapper[4998]: E0227 10:19:40.764438 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:40 crc kubenswrapper[4998]: I0227 10:19:40.764510 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:40 crc kubenswrapper[4998]: E0227 10:19:40.764558 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.302820 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7"} Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.304321 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.304468 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.304501 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.307808 4998 generic.go:334] "Generic (PLEG): container finished" podID="e55e9768-52ee-4fcf-a279-1b55e6d6c6fd" containerID="f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d" exitCode=0 Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.307876 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" event={"ID":"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd","Type":"ContainerDied","Data":"f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d"} Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.323034 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.339299 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.339915 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.340578 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.352020 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc 
kubenswrapper[4998]: I0227 10:19:41.364392 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.378534 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.391695 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.403893 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.419549 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.440851 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.456048 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.466820 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.479454 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.496019 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.506815 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.515256 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.542139 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.553422 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.563296 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.574575 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.585803 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.599603 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.690578 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.706353 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.718505 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.737802 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.747899 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.759891 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.772143 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.782424 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.825827 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.838768 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.849994 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.861938 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:41 crc kubenswrapper[4998]: I0227 10:19:41.872151 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:41Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.313701 4998 generic.go:334] "Generic (PLEG): container finished" podID="e55e9768-52ee-4fcf-a279-1b55e6d6c6fd" 
containerID="736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48" exitCode=0 Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.313785 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" event={"ID":"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd","Type":"ContainerDied","Data":"736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48"} Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.332871 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initC
ontainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.346392 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.358284 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.374712 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.390798 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.405985 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.420008 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.434801 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.447046 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.459897 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc 
kubenswrapper[4998]: I0227 10:19:42.474887 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.489126 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.504933 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.523625 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.535304 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.552270 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 
10:19:42.564011 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:42Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.764754 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.764839 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:42 crc kubenswrapper[4998]: E0227 10:19:42.764873 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.765034 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:42 crc kubenswrapper[4998]: E0227 10:19:42.765029 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:42 crc kubenswrapper[4998]: E0227 10:19:42.765082 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:42 crc kubenswrapper[4998]: I0227 10:19:42.765197 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:42 crc kubenswrapper[4998]: E0227 10:19:42.765261 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.321016 4998 generic.go:334] "Generic (PLEG): container finished" podID="e55e9768-52ee-4fcf-a279-1b55e6d6c6fd" containerID="3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee" exitCode=0 Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.321118 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" event={"ID":"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd","Type":"ContainerDied","Data":"3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee"} Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.324253 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/0.log" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.328338 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7" exitCode=1 Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.328399 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7"} Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.329082 4998 scope.go:117] "RemoveContainer" containerID="2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.336286 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.360607 4998 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.377533 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.394364 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.419179 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.431936 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.444290 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.458332 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.469758 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.485797 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.497649 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.508394 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.521993 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.533839 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.544214 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.556349 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.570145 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.581725 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.595880 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.605786 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.619131 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.632209 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.642302 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.654181 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 
10:19:43.667324 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.679287 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.706646 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.725184 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.748388 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"message\\\":\\\"s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.875868 6822 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876307 6822 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876376 6822 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876677 6822 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 10:19:42.876734 6822 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 10:19:42.876745 6822 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 10:19:42.876787 6822 factory.go:656] Stopping watch factory\\\\nI0227 10:19:42.876808 6822 ovnkube.go:599] Stopped ovnkube\\\\nI0227 10:19:42.876811 6822 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 10:19:42.876848 6822 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa7
4f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.760578 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.774633 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc 
kubenswrapper[4998]: I0227 10:19:43.790345 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.804359 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: I0227 10:19:43.818169 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:43Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:43 crc kubenswrapper[4998]: E0227 10:19:43.862299 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.337771 4998 generic.go:334] "Generic (PLEG): container finished" podID="e55e9768-52ee-4fcf-a279-1b55e6d6c6fd" containerID="16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb" exitCode=0 Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.337841 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" event={"ID":"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd","Type":"ContainerDied","Data":"16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb"} Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.340915 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/0.log" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.345065 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094"} Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.345350 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.353209 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.381556 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.396012 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.412258 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.424711 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.436032 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.447288 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.458428 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.473479 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.488829 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.500656 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.517329 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.527480 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc 
kubenswrapper[4998]: I0227 10:19:44.545036 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.561717 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.574613 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.597986 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"message\\\":\\\"s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.875868 6822 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876307 6822 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876376 6822 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876677 6822 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 10:19:42.876734 6822 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 10:19:42.876745 6822 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 10:19:42.876787 6822 factory.go:656] Stopping watch factory\\\\nI0227 10:19:42.876808 6822 ovnkube.go:599] Stopped ovnkube\\\\nI0227 10:19:42.876811 6822 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 10:19:42.876848 6822 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa7
4f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.610167 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.623259 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.637035 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.650190 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ec
ce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.663022 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2
cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.676169 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.689138 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.701817 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.714001 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.734267 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"message\\\":\\\"s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.875868 6822 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 
10:19:42.876307 6822 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876376 6822 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876677 6822 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 10:19:42.876734 6822 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 10:19:42.876745 6822 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 10:19:42.876787 6822 factory.go:656] Stopping watch factory\\\\nI0227 10:19:42.876808 6822 ovnkube.go:599] Stopped ovnkube\\\\nI0227 10:19:42.876811 6822 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 10:19:42.876848 6822 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.745066 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.755798 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc 
kubenswrapper[4998]: I0227 10:19:44.763915 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.764020 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:44 crc kubenswrapper[4998]: E0227 10:19:44.764146 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.764207 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:44 crc kubenswrapper[4998]: E0227 10:19:44.764282 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.764438 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:44 crc kubenswrapper[4998]: E0227 10:19:44.764520 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:44 crc kubenswrapper[4998]: E0227 10:19:44.764683 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.770121 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.783724 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.794854 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.804163 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:44 crc kubenswrapper[4998]: I0227 10:19:44.822344 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.356128 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/1.log" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.357752 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/0.log" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.361203 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094" exitCode=1 Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.361290 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094"} Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.361323 4998 scope.go:117] "RemoveContainer" 
containerID="2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.361884 4998 scope.go:117] "RemoveContainer" containerID="c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094" Feb 27 10:19:45 crc kubenswrapper[4998]: E0227 10:19:45.362015 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.371109 4998 generic.go:334] "Generic (PLEG): container finished" podID="e55e9768-52ee-4fcf-a279-1b55e6d6c6fd" containerID="3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556" exitCode=0 Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.371165 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" event={"ID":"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd","Type":"ContainerDied","Data":"3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556"} Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.379089 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.393994 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.412207 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.426753 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.442593 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.457183 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.475343 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.489805 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.503923 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.516541 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.537184 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"message\\\":\\\"s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.875868 6822 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 
10:19:42.876307 6822 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876376 6822 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876677 6822 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 10:19:42.876734 6822 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 10:19:42.876745 6822 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 10:19:42.876787 6822 factory.go:656] Stopping watch factory\\\\nI0227 10:19:42.876808 6822 ovnkube.go:599] Stopped ovnkube\\\\nI0227 10:19:42.876811 6822 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 10:19:42.876848 6822 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"message\\\":\\\" policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z]\\\\nI0227 10:19:44.383828 7059 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name
\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-o
verrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 
10:19:45.547081 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.558816 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d79342
6f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.574711 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.590164 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.604256 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.616955 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.636099 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.650967 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.662837 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.677285 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.689991 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.705348 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.716041 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.728126 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.740339 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.753405 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.764725 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.783427 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"message\\\":\\\"s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.875868 6822 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 
10:19:42.876307 6822 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876376 6822 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876677 6822 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 10:19:42.876734 6822 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 10:19:42.876745 6822 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 10:19:42.876787 6822 factory.go:656] Stopping watch factory\\\\nI0227 10:19:42.876808 6822 ovnkube.go:599] Stopped ovnkube\\\\nI0227 10:19:42.876811 6822 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 10:19:42.876848 6822 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"message\\\":\\\" policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z]\\\\nI0227 10:19:44.383828 7059 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name
\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-o
verrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 
10:19:45.794810 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.808054 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d79342
6f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.819370 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.834912 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6
df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:45 crc kubenswrapper[4998]: I0227 10:19:45.849506 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:45Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.382659 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" event={"ID":"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd","Type":"ContainerStarted","Data":"403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db"} Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.384960 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/1.log" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.390803 4998 scope.go:117] "RemoveContainer" containerID="c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094" Feb 27 10:19:46 crc kubenswrapper[4998]: E0227 10:19:46.391366 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.400077 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4a
ade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.415743 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c
69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.430691 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.444120 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.459381 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.474765 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.497593 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.513734 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.530124 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.547607 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.576660 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.598167 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.626104 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.653443 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.669743 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.689806 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2928f7a82ae132fbce54cc3880c63a70af1dd6757847ff83a07240c9faa1c9f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"message\\\":\\\"s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.875868 6822 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 
10:19:42.876307 6822 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876376 6822 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 10:19:42.876677 6822 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 10:19:42.876734 6822 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 10:19:42.876745 6822 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 10:19:42.876787 6822 factory.go:656] Stopping watch factory\\\\nI0227 10:19:42.876808 6822 ovnkube.go:599] Stopped ovnkube\\\\nI0227 10:19:42.876811 6822 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 10:19:42.876848 6822 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"message\\\":\\\" policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z]\\\\nI0227 10:19:44.383828 7059 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name
\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-o
verrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 
10:19:46.700888 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.714857 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.726032 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-2
7T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.734797 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc 
kubenswrapper[4998]: I0227 10:19:46.748311 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.758690 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.764925 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.765019 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.765034 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:46 crc kubenswrapper[4998]: E0227 10:19:46.765094 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.765076 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:46 crc kubenswrapper[4998]: E0227 10:19:46.765168 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:46 crc kubenswrapper[4998]: E0227 10:19:46.765316 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:46 crc kubenswrapper[4998]: E0227 10:19:46.765456 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.773408 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.786379 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.798458 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3
278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.809530 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.822840 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.837094 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.853559 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.874715 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"message\\\":\\\" policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc 
annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z]\\\\nI0227 10:19:44.383828 7059 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.888256 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.909093 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.927946 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c
69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:46 crc kubenswrapper[4998]: I0227 10:19:46.941988 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-27T10:19:46Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.565164 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.565202 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.565214 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.565251 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.565263 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:47Z","lastTransitionTime":"2026-02-27T10:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:47 crc kubenswrapper[4998]: E0227 10:19:47.585962 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:47Z is after 2025-08-24T17:21:41Z"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.590829 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.590861 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.590873 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.590889 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.590902 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:47Z","lastTransitionTime":"2026-02-27T10:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:47 crc kubenswrapper[4998]: E0227 10:19:47.603322 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:47Z is after 2025-08-24T17:21:41Z"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.609491 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.609536 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.609547 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.609567 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.609582 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:47Z","lastTransitionTime":"2026-02-27T10:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:19:47 crc kubenswrapper[4998]: E0227 10:19:47.624481 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:47Z is after 2025-08-24T17:21:41Z"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.628760 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.628788 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.628797 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.628813 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.628823 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:47Z","lastTransitionTime":"2026-02-27T10:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:47 crc kubenswrapper[4998]: E0227 10:19:47.642709 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:47Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.646922 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.647125 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.647295 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.647456 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:47 crc kubenswrapper[4998]: I0227 10:19:47.647597 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:47Z","lastTransitionTime":"2026-02-27T10:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:47 crc kubenswrapper[4998]: E0227 10:19:47.675438 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:47Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:47 crc kubenswrapper[4998]: E0227 10:19:47.675959 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.764176 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.764418 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:48 crc kubenswrapper[4998]: E0227 10:19:48.764587 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.764632 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:48 crc kubenswrapper[4998]: E0227 10:19:48.764762 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:48 crc kubenswrapper[4998]: E0227 10:19:48.764941 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.765358 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:48 crc kubenswrapper[4998]: E0227 10:19:48.765581 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.787292 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.805844 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.816500 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.828680 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.840637 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.855327 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: E0227 10:19:48.862917 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.869623 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.885393 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.902352 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.922202 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.939558 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.967543 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"message\\\":\\\" policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc 
annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z]\\\\nI0227 10:19:44.383828 7059 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.982872 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:48 crc kubenswrapper[4998]: I0227 10:19:48.995131 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:48Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:49 crc 
kubenswrapper[4998]: I0227 10:19:49.009849 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:49Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:49 crc kubenswrapper[4998]: I0227 10:19:49.034680 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808
b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc
44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:49Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:49 crc kubenswrapper[4998]: I0227 10:19:49.055258 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:49Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:50 crc kubenswrapper[4998]: I0227 10:19:50.764320 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:50 crc kubenswrapper[4998]: I0227 10:19:50.764384 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:50 crc kubenswrapper[4998]: I0227 10:19:50.764421 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:50 crc kubenswrapper[4998]: I0227 10:19:50.764355 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:50 crc kubenswrapper[4998]: E0227 10:19:50.764477 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:50 crc kubenswrapper[4998]: E0227 10:19:50.764611 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:50 crc kubenswrapper[4998]: E0227 10:19:50.764687 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:50 crc kubenswrapper[4998]: E0227 10:19:50.764858 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:52 crc kubenswrapper[4998]: I0227 10:19:52.764210 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:52 crc kubenswrapper[4998]: I0227 10:19:52.764334 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:52 crc kubenswrapper[4998]: I0227 10:19:52.764363 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:52 crc kubenswrapper[4998]: I0227 10:19:52.764407 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:52 crc kubenswrapper[4998]: E0227 10:19:52.764491 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:52 crc kubenswrapper[4998]: E0227 10:19:52.764650 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:52 crc kubenswrapper[4998]: E0227 10:19:52.764773 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:52 crc kubenswrapper[4998]: E0227 10:19:52.764879 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:53 crc kubenswrapper[4998]: E0227 10:19:53.864460 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 10:19:54 crc kubenswrapper[4998]: I0227 10:19:54.671995 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:54 crc kubenswrapper[4998]: E0227 10:19:54.672208 4998 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:54 crc kubenswrapper[4998]: E0227 10:19:54.672330 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs podName:40178d6d-6068-4937-b7d5-883538892cc5 nodeName:}" failed. No retries permitted until 2026-02-27 10:20:26.672303096 +0000 UTC m=+178.670574104 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs") pod "network-metrics-daemon-86xkz" (UID: "40178d6d-6068-4937-b7d5-883538892cc5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:19:54 crc kubenswrapper[4998]: I0227 10:19:54.764237 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:54 crc kubenswrapper[4998]: I0227 10:19:54.764218 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:54 crc kubenswrapper[4998]: I0227 10:19:54.764317 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:54 crc kubenswrapper[4998]: E0227 10:19:54.764383 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:54 crc kubenswrapper[4998]: E0227 10:19:54.764438 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:54 crc kubenswrapper[4998]: I0227 10:19:54.764515 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:54 crc kubenswrapper[4998]: E0227 10:19:54.764585 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:54 crc kubenswrapper[4998]: E0227 10:19:54.765547 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:56 crc kubenswrapper[4998]: I0227 10:19:56.764479 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:56 crc kubenswrapper[4998]: I0227 10:19:56.764555 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:56 crc kubenswrapper[4998]: I0227 10:19:56.764604 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:56 crc kubenswrapper[4998]: E0227 10:19:56.764666 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:56 crc kubenswrapper[4998]: I0227 10:19:56.764677 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:56 crc kubenswrapper[4998]: E0227 10:19:56.764852 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:56 crc kubenswrapper[4998]: E0227 10:19:56.764982 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:56 crc kubenswrapper[4998]: E0227 10:19:56.765060 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.868537 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.868823 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.868834 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.868850 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.868858 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:57Z","lastTransitionTime":"2026-02-27T10:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:57 crc kubenswrapper[4998]: E0227 10:19:57.883455 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:57Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.887160 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.887198 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.887209 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.887243 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.887260 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:57Z","lastTransitionTime":"2026-02-27T10:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:57 crc kubenswrapper[4998]: E0227 10:19:57.900429 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:57Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.904203 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.904257 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.904268 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.904282 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.904292 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:57Z","lastTransitionTime":"2026-02-27T10:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:57 crc kubenswrapper[4998]: E0227 10:19:57.919543 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:57Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.923721 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.923760 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.923771 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.923785 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.923795 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:57Z","lastTransitionTime":"2026-02-27T10:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:57 crc kubenswrapper[4998]: E0227 10:19:57.934203 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:57Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.938368 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.938427 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.938437 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.938450 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:19:57 crc kubenswrapper[4998]: I0227 10:19:57.938459 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:19:57Z","lastTransitionTime":"2026-02-27T10:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:19:57 crc kubenswrapper[4998]: E0227 10:19:57.951482 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:57Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:57 crc kubenswrapper[4998]: E0227 10:19:57.951636 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.764822 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.764820 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.765182 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:19:58 crc kubenswrapper[4998]: E0227 10:19:58.765180 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:19:58 crc kubenswrapper[4998]: E0227 10:19:58.765375 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.765450 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:19:58 crc kubenswrapper[4998]: E0227 10:19:58.765537 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:19:58 crc kubenswrapper[4998]: E0227 10:19:58.765661 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.783942 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 
10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.798489 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.809747 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.822273 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.838495 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.855653 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: E0227 10:19:58.866259 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.875739 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.889153 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.905020 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.921150 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.935451 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.957420 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"message\\\":\\\" policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc 
annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z]\\\\nI0227 10:19:44.383828 7059 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.970615 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc kubenswrapper[4998]: I0227 10:19:58.983424 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:58 crc 
kubenswrapper[4998]: I0227 10:19:58.994990 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:58Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:59 crc kubenswrapper[4998]: I0227 10:19:59.014097 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808
b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc
44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:59Z is after 2025-08-24T17:21:41Z" Feb 27 10:19:59 crc kubenswrapper[4998]: I0227 10:19:59.028402 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:59Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:00 crc kubenswrapper[4998]: I0227 10:20:00.764509 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:00 crc kubenswrapper[4998]: I0227 10:20:00.764567 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:00 crc kubenswrapper[4998]: I0227 10:20:00.764591 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:00 crc kubenswrapper[4998]: I0227 10:20:00.764656 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:00 crc kubenswrapper[4998]: E0227 10:20:00.764809 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:00 crc kubenswrapper[4998]: E0227 10:20:00.764896 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:00 crc kubenswrapper[4998]: E0227 10:20:00.764964 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:00 crc kubenswrapper[4998]: E0227 10:20:00.765086 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:01 crc kubenswrapper[4998]: I0227 10:20:01.765805 4998 scope.go:117] "RemoveContainer" containerID="c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.212507 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/1.log" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.215050 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df"} Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.215532 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.229466 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.242995 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc 
kubenswrapper[4998]: I0227 10:20:02.263241 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.277696 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.291984 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.309940 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"message\\\":\\\" policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc 
annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z]\\\\nI0227 10:19:44.383828 7059 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, 
b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.321262 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.335064 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c69401092
06ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.348758 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.362528 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.373934 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.386055 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.397144 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.410716 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.423006 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.435695 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.445219 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.764665 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.764836 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:02 crc kubenswrapper[4998]: E0227 10:20:02.765064 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.765132 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:02 crc kubenswrapper[4998]: I0227 10:20:02.764902 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:02 crc kubenswrapper[4998]: E0227 10:20:02.765304 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:02 crc kubenswrapper[4998]: E0227 10:20:02.765477 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:02 crc kubenswrapper[4998]: E0227 10:20:02.765533 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.220992 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/2.log" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.221689 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/1.log" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.224922 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df" exitCode=1 Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.224974 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df"} Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.225015 4998 scope.go:117] "RemoveContainer" containerID="c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.225672 4998 scope.go:117] "RemoveContainer" 
containerID="8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df" Feb 27 10:20:03 crc kubenswrapper[4998]: E0227 10:20:03.225853 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.245551 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T1
0:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.263774 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.278764 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.293194 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.307042 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.318181 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.332596 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.343532 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.356095 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.370391 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.384490 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.398607 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.418900 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4d0eb0f1e9b5c39bd1f8cf22c6eeef46408add1f6abc2e18be2a9d20a54a094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"message\\\":\\\" policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc 
annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:19:44Z is after 2025-08-24T17:21:41Z]\\\\nI0227 10:19:44.383828 7059 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:02Z\\\",\\\"message\\\":\\\"al for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 10:20:02.609935 7320 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0227 10:20:02.609942 7320 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m6kr5\\\\nI0227 10:20:02.609946 7320 
obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0227 10:20:02.609952 7320 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z]\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mount
Path\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 
10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.431731 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.442513 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.456373 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c
69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: I0227 10:20:03.471427 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-27T10:20:03Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:03 crc kubenswrapper[4998]: E0227 10:20:03.867058 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.231020 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/2.log" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.234908 4998 scope.go:117] "RemoveContainer" containerID="8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df" Feb 27 10:20:04 crc kubenswrapper[4998]: E0227 10:20:04.235083 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.254938 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.267561 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.280740 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.291708 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.303046 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.314944 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.332249 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.341645 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.350433 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.364619 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.378399 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.389264 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.416973 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:02Z\\\",\\\"message\\\":\\\"al for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 10:20:02.609935 7320 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0227 10:20:02.609942 7320 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-m6kr5\\\\nI0227 10:20:02.609946 7320 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0227 10:20:02.609952 7320 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z]\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.429209 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.441155 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.458741 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c
69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.470898 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-27T10:20:04Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.764085 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.764191 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:04 crc kubenswrapper[4998]: E0227 10:20:04.764332 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:04 crc kubenswrapper[4998]: E0227 10:20:04.764518 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.764631 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:04 crc kubenswrapper[4998]: I0227 10:20:04.764659 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:04 crc kubenswrapper[4998]: E0227 10:20:04.764824 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:04 crc kubenswrapper[4998]: E0227 10:20:04.764923 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:06 crc kubenswrapper[4998]: I0227 10:20:06.700962 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.701163 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:10.701146285 +0000 UTC m=+222.699417253 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:20:06 crc kubenswrapper[4998]: I0227 10:20:06.764587 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:06 crc kubenswrapper[4998]: I0227 10:20:06.764630 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.764721 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:06 crc kubenswrapper[4998]: I0227 10:20:06.764775 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:06 crc kubenswrapper[4998]: I0227 10:20:06.765396 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.765574 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.765602 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.765669 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:06 crc kubenswrapper[4998]: I0227 10:20:06.801877 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:06 crc kubenswrapper[4998]: I0227 10:20:06.801976 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:06 crc kubenswrapper[4998]: I0227 10:20:06.802034 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:06 crc kubenswrapper[4998]: I0227 10:20:06.802077 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802214 4998 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802301 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802321 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802333 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802220 4998 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802395 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802374 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:10.802343162 +0000 UTC m=+222.800614170 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802810 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:10.802777763 +0000 UTC m=+222.801048721 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802734 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.802936 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.807345 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:10.802831605 +0000 UTC m=+222.801102563 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:20:06 crc kubenswrapper[4998]: E0227 10:20:06.807417 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:10.807386861 +0000 UTC m=+222.805657869 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.774076 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.954747 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.954836 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.954851 4998 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.954868 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.954877 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:07Z","lastTransitionTime":"2026-02-27T10:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:20:07 crc kubenswrapper[4998]: E0227 10:20:07.967958 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:07Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.972135 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.972178 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.972189 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.972204 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.972215 4998 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:07Z","lastTransitionTime":"2026-02-27T10:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:20:07 crc kubenswrapper[4998]: E0227 10:20:07.992080 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:07Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.995594 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.995653 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.995664 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.995679 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:07 crc kubenswrapper[4998]: I0227 10:20:07.995691 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:07Z","lastTransitionTime":"2026-02-27T10:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:08 crc kubenswrapper[4998]: E0227 10:20:08.009441 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.013137 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.013194 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.013206 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.013258 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.013272 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:08Z","lastTransitionTime":"2026-02-27T10:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:08 crc kubenswrapper[4998]: E0227 10:20:08.029123 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.033900 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.033959 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.033980 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.034009 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.034032 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:08Z","lastTransitionTime":"2026-02-27T10:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:08 crc kubenswrapper[4998]: E0227 10:20:08.046720 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: E0227 10:20:08.046835 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.764475 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:08 crc kubenswrapper[4998]: E0227 10:20:08.764717 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.764789 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.764828 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.764855 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:08 crc kubenswrapper[4998]: E0227 10:20:08.764899 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:08 crc kubenswrapper[4998]: E0227 10:20:08.764966 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:08 crc kubenswrapper[4998]: E0227 10:20:08.765082 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.780270 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80f17ef8-6ca9-4916-b46a-449ae073f38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c1076d7b06c386da934e0ef2a7ae42071884aaa9781bf133c657585aadddbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 
')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 10:18:02.681302 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 10:18:02.682345 1 observer_polling.go:159] Starting file observer\\\\nI0227 10:18:02.683346 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 10:18:02.684103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 10:18:25.996529 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0227 10:18:32.988177 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:18:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3aa3432d6c4ec6208a82daff53e88297a9a36dcba4740c03f7982e806c8278e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e09abfc39e6ff32d77e2feeb903f10bb1d82033d56c9b6b7cc45bf52c6a5d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.792197 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.809819 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c69401092
06ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.825862 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.841396 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.853919 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.866049 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: E0227 10:20:08.868727 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.886250 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.897551 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.910516 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.922993 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.935257 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.952114 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.966501 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.978364 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:08 crc kubenswrapper[4998]: I0227 10:20:08.992118 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:08Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:09 crc kubenswrapper[4998]: I0227 10:20:09.009267 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:02Z\\\",\\\"message\\\":\\\"al for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 10:20:02.609935 7320 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0227 10:20:02.609942 7320 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-m6kr5\\\\nI0227 10:20:02.609946 7320 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0227 10:20:02.609952 7320 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z]\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:09Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:09 crc kubenswrapper[4998]: I0227 10:20:09.018871 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:09Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:10 crc kubenswrapper[4998]: I0227 10:20:10.764329 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:10 crc kubenswrapper[4998]: I0227 10:20:10.764368 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:10 crc kubenswrapper[4998]: I0227 10:20:10.764375 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:10 crc kubenswrapper[4998]: I0227 10:20:10.764423 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:10 crc kubenswrapper[4998]: E0227 10:20:10.764552 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:10 crc kubenswrapper[4998]: E0227 10:20:10.764594 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:10 crc kubenswrapper[4998]: E0227 10:20:10.764673 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:10 crc kubenswrapper[4998]: E0227 10:20:10.764747 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:12 crc kubenswrapper[4998]: I0227 10:20:12.764569 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:12 crc kubenswrapper[4998]: I0227 10:20:12.764584 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:12 crc kubenswrapper[4998]: E0227 10:20:12.764789 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:12 crc kubenswrapper[4998]: I0227 10:20:12.764583 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:12 crc kubenswrapper[4998]: E0227 10:20:12.764844 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:12 crc kubenswrapper[4998]: E0227 10:20:12.764927 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:12 crc kubenswrapper[4998]: I0227 10:20:12.764611 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:12 crc kubenswrapper[4998]: E0227 10:20:12.765085 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:13 crc kubenswrapper[4998]: E0227 10:20:13.870039 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:20:14 crc kubenswrapper[4998]: I0227 10:20:14.764801 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:14 crc kubenswrapper[4998]: I0227 10:20:14.764871 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:14 crc kubenswrapper[4998]: I0227 10:20:14.765016 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:14 crc kubenswrapper[4998]: E0227 10:20:14.764920 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:14 crc kubenswrapper[4998]: I0227 10:20:14.764880 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:14 crc kubenswrapper[4998]: E0227 10:20:14.765187 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:14 crc kubenswrapper[4998]: E0227 10:20:14.765385 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:14 crc kubenswrapper[4998]: E0227 10:20:14.765428 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:16 crc kubenswrapper[4998]: I0227 10:20:16.764700 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:16 crc kubenswrapper[4998]: I0227 10:20:16.764700 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:16 crc kubenswrapper[4998]: E0227 10:20:16.764880 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:16 crc kubenswrapper[4998]: I0227 10:20:16.764995 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:16 crc kubenswrapper[4998]: I0227 10:20:16.765041 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:16 crc kubenswrapper[4998]: E0227 10:20:16.765218 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:16 crc kubenswrapper[4998]: E0227 10:20:16.765360 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:16 crc kubenswrapper[4998]: E0227 10:20:16.765553 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:17 crc kubenswrapper[4998]: I0227 10:20:17.780718 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.316876 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.316935 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.316953 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.316976 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.316992 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:18Z","lastTransitionTime":"2026-02-27T10:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.336879 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.342176 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.342207 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.342218 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.342253 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.342265 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:18Z","lastTransitionTime":"2026-02-27T10:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.362090 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.370947 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.371022 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.371043 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.371070 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.371093 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:18Z","lastTransitionTime":"2026-02-27T10:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.390331 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.394500 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.394585 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.394599 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.394619 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.394631 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:18Z","lastTransitionTime":"2026-02-27T10:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.408916 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.413688 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.413748 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.413765 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.413790 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.413807 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:18Z","lastTransitionTime":"2026-02-27T10:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.429830 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.429978 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.764309 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.764417 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.764359 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.764589 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.764627 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.764761 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.764900 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.765360 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.765655 4998 scope.go:117] "RemoveContainer" containerID="8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df" Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.765857 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.776503 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.793244 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.809579 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.824105 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.846101 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:02Z\\\",\\\"message\\\":\\\"al for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 10:20:02.609935 7320 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0227 10:20:02.609942 7320 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-m6kr5\\\\nI0227 10:20:02.609946 7320 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0227 10:20:02.609952 7320 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z]\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.859261 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: E0227 10:20:18.871564 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.875420 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80f17ef8-6ca9-4916-b46a-449ae073f38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c1076d7b06c386da934e0ef2a7ae42071884aaa9781bf133c657585aadddbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 10:18:02.681302 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 10:18:02.682345 1 observer_polling.go:159] Starting file observer\\\\nI0227 10:18:02.683346 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 10:18:02.684103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 10:18:25.996529 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0227 10:18:32.988177 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:18:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3aa3432d6c4ec6208a82daff53e88297a9a36dcba4740c03f7982e806c8278e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e09abfc39e6ff32d77e2feeb903f10bb1d82033d56c9b6b7cc45bf52c6a5d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.890593 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.906198 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c69401092
06ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.917135 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.936643 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89c9d58-a300-46ee-9598-f461887c8f9c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b0013632790bf9b7d865e7dd2b831daeac0c10446a270d6017c83c65f5687ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d760d1817981842ba5ff7cb6e99e17a2bb3c2a706ac403aff1e9df265bc38e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07faee8c5639e04f711dedbcfa4188da5234cf6ccb4c1800d23311dc29e41f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99112c9996d81cabf24593764be3695882a1acbdb45233464eb28fbdaa0869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceedf973a7cd5ad682ffbb7ac21dbfff2f467cbebab605bc17fcb5eb53e05f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.948783 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.960466 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.970485 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.981184 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:18 crc kubenswrapper[4998]: I0227 10:20:18.996176 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:18Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:19 crc kubenswrapper[4998]: I0227 10:20:19.012061 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:19Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:19 crc kubenswrapper[4998]: I0227 10:20:19.025414 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:19Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:19 crc kubenswrapper[4998]: I0227 10:20:19.035939 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:19Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:20 crc kubenswrapper[4998]: I0227 10:20:20.764031 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:20 crc kubenswrapper[4998]: I0227 10:20:20.764169 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:20 crc kubenswrapper[4998]: E0227 10:20:20.764285 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:20 crc kubenswrapper[4998]: I0227 10:20:20.764350 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:20 crc kubenswrapper[4998]: E0227 10:20:20.764553 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:20 crc kubenswrapper[4998]: I0227 10:20:20.764650 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:20 crc kubenswrapper[4998]: E0227 10:20:20.764740 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:20 crc kubenswrapper[4998]: E0227 10:20:20.764876 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:22 crc kubenswrapper[4998]: I0227 10:20:22.764340 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:22 crc kubenswrapper[4998]: I0227 10:20:22.764373 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:22 crc kubenswrapper[4998]: I0227 10:20:22.764409 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:22 crc kubenswrapper[4998]: I0227 10:20:22.764425 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:22 crc kubenswrapper[4998]: E0227 10:20:22.764496 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:22 crc kubenswrapper[4998]: E0227 10:20:22.764637 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:22 crc kubenswrapper[4998]: E0227 10:20:22.764749 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:22 crc kubenswrapper[4998]: E0227 10:20:22.765038 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:23 crc kubenswrapper[4998]: E0227 10:20:23.874536 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.306805 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/0.log" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.306876 4998 generic.go:334] "Generic (PLEG): container finished" podID="a046a5ca-7081-4920-98af-1027a5bc29d0" containerID="70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4" exitCode=1 Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.306913 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46lvx" event={"ID":"a046a5ca-7081-4920-98af-1027a5bc29d0","Type":"ContainerDied","Data":"70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4"} Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.307493 4998 scope.go:117] "RemoveContainer" containerID="70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.339936 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:02Z\\\",\\\"message\\\":\\\"al for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node 
crc\\\\nI0227 10:20:02.609935 7320 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0227 10:20:02.609942 7320 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m6kr5\\\\nI0227 10:20:02.609946 7320 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0227 10:20:02.609952 7320 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z]\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.357034 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.370986 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc 
kubenswrapper[4998]: I0227 10:20:24.391806 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.406474 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.419991 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.443715 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80f17ef8-6ca9-4916-b46a-449ae073f38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c1076d7b06c386da934e0ef2a7ae42071884aaa9781bf133c657585aadddbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 10:18:02.681302 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 10:18:02.682345 1 observer_polling.go:159] Starting file observer\\\\nI0227 10:18:02.683346 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 10:18:02.684103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 10:18:25.996529 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0227 10:18:32.988177 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:18:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3aa3432d6c4ec6208a82daff53e88297a9a36dcba4740c03f7982e806c8278e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e09abfc39e6ff32d77e2feeb903f10bb1d82033d56c9b6b7cc45bf52c6a5d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.456380 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.471705 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c69401092
06ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.485326 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.513520 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89c9d58-a300-46ee-9598-f461887c8f9c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b0013632790bf9b7d865e7dd2b831daeac0c10446a270d6017c83c65f5687ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d760d1817981842ba5ff7cb6e99e17a2bb3c2a706ac403aff1e9df265bc38e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07faee8c5639e04f711dedbcfa4188da5234cf6ccb4c1800d23311dc29e41f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99112c9996d81cabf24593764be3695882a1acbdb45233464eb28fbdaa0869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceedf973a7cd5ad682ffbb7ac21dbfff2f467cbebab605bc17fcb5eb53e05f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.532842 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.551320 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.562506 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.574519 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 
10:20:24.586401 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.601307 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.613372 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.627504 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:23Z\\\",\\\"message\\\":\\\"2026-02-27T10:19:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc\\\\n2026-02-27T10:19:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc to /host/opt/cni/bin/\\\\n2026-02-27T10:19:38Z [verbose] multus-daemon started\\\\n2026-02-27T10:19:38Z [verbose] Readiness Indicator file check\\\\n2026-02-27T10:20:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:24Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.765583 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.765664 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.765617 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:24 crc kubenswrapper[4998]: E0227 10:20:24.765829 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:24 crc kubenswrapper[4998]: I0227 10:20:24.765867 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:24 crc kubenswrapper[4998]: E0227 10:20:24.765956 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:24 crc kubenswrapper[4998]: E0227 10:20:24.766059 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:24 crc kubenswrapper[4998]: E0227 10:20:24.766151 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.315354 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/0.log" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.315452 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46lvx" event={"ID":"a046a5ca-7081-4920-98af-1027a5bc29d0","Type":"ContainerStarted","Data":"cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93"} Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.353319 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89c9d58-a300-46ee-9598-f461887c8f9c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b0013632790bf9b7d865e7dd2b831daeac0c10446a270d6017c83c65f5687ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d760d1817981842ba5ff7cb6e99e17a2bb3c2a706ac403aff1e9df265bc38e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07faee8c5639e04f711dedbcfa4188da5234cf6ccb4c1800d23311dc29e41f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99112c9996d81cabf24593764be3695882a1acbdb45233464eb28fbdaa0869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceedf973a7cd5ad682ffbb7ac21dbfff2f467cbebab605bc17fcb5eb53e05f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\
\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.381374 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.399785 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.414904 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.432297 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.447407 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.464764 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.481965 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:23Z\\\",\\\"message\\\":\\\"2026-02-27T10:19:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc\\\\n2026-02-27T10:19:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc to /host/opt/cni/bin/\\\\n2026-02-27T10:19:38Z [verbose] multus-daemon started\\\\n2026-02-27T10:19:38Z [verbose] Readiness Indicator file check\\\\n2026-02-27T10:20:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.494068 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2
cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.505216 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.516524 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc 
kubenswrapper[4998]: I0227 10:20:25.530469 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.548477 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.566808 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.592363 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:02Z\\\",\\\"message\\\":\\\"al for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 10:20:02.609935 7320 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0227 10:20:02.609942 7320 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-m6kr5\\\\nI0227 10:20:02.609946 7320 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0227 10:20:02.609952 7320 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z]\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.610830 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80f17ef8-6ca9-4916-b46a-449ae073f38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c1076d7b06c386da934e0ef2a7ae42071884aaa9781bf133c657585aadddbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 10:18:02.681302 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 10:18:02.682345 1 observer_polling.go:159] Starting file observer\\\\nI0227 10:18:02.683346 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 10:18:02.684103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 10:18:25.996529 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0227 10:18:32.988177 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:18:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3aa3432d6c4ec6208a82daff53e88297a9a36dcba4740c03f7982e806c8278e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e09abfc39e6ff32d77e2feeb903f10bb1d82033d56c9b6b7cc45bf52c6a5d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.624499 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.646796 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c69401092
06ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:25 crc kubenswrapper[4998]: I0227 10:20:25.663595 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:25Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:26 crc kubenswrapper[4998]: I0227 10:20:26.716410 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:26 crc kubenswrapper[4998]: E0227 10:20:26.716578 
4998 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:20:26 crc kubenswrapper[4998]: E0227 10:20:26.716642 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs podName:40178d6d-6068-4937-b7d5-883538892cc5 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:30.716625519 +0000 UTC m=+242.714896497 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs") pod "network-metrics-daemon-86xkz" (UID: "40178d6d-6068-4937-b7d5-883538892cc5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:20:26 crc kubenswrapper[4998]: I0227 10:20:26.764512 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:26 crc kubenswrapper[4998]: E0227 10:20:26.764645 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:26 crc kubenswrapper[4998]: I0227 10:20:26.764512 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:26 crc kubenswrapper[4998]: I0227 10:20:26.764702 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:26 crc kubenswrapper[4998]: I0227 10:20:26.764736 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:26 crc kubenswrapper[4998]: E0227 10:20:26.764920 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:26 crc kubenswrapper[4998]: E0227 10:20:26.765047 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:26 crc kubenswrapper[4998]: E0227 10:20:26.765133 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.764715 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.764738 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.765522 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.764763 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.764895 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.765621 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.765659 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.765721 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.791392 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:02Z\\\",\\\"message\\\":\\\"al for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 10:20:02.609935 7320 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0227 10:20:02.609942 7320 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-m6kr5\\\\nI0227 10:20:02.609946 7320 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0227 10:20:02.609952 7320 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z]\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.806102 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.807786 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.807828 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.807838 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.807857 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.807868 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:28Z","lastTransitionTime":"2026-02-27T10:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.823192 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.826509 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.831059 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.831131 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.831147 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.831173 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.831189 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:28Z","lastTransitionTime":"2026-02-27T10:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.840090 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.847976 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.853144 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.853190 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.853202 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.853255 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.853274 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:28Z","lastTransitionTime":"2026-02-27T10:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.860119 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.869338 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.875196 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.875302 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.875319 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.875345 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.875361 4998 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:28Z","lastTransitionTime":"2026-02-27T10:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.875603 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.878316 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.891397 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.895821 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80f17ef8-6ca9-4916-b46a-449ae073f38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c1076d7b06c386da934e0ef2a7ae42071884aaa9781bf133c657585aadddbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":
\\\"2026-02-27T10:18:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 10:18:02.681302 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 10:18:02.682345 1 observer_polling.go:159] Starting file observer\\\\nI0227 10:18:02.683346 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 10:18:02.684103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 10:18:25.996529 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0227 10:18:32.988177 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:18:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3aa3432d6c4ec6208a82daff53e88297a9a36dcba4740c03f7982e806c8278e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e09abfc39e6ff32d77e2feeb903f10bb1d82033d56c9b6b7cc45bf52c6a5d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.898303 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.898353 4998 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.898369 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.898391 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.898405 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:28Z","lastTransitionTime":"2026-02-27T10:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.909590 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.913521 4998 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a76c225a-d617-4499-bb32-eb42f31208cd\\\",\\\"systemUUID\\\":\\\"9cb598f9-84ae-4703-b4b9-6775104308e7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: E0227 10:20:28.913776 4998 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.927894 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c
69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.947959 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.966264 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89c9d58-a300-46ee-9598-f461887c8f9c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b0013632790bf9b7d865e7dd2b831daeac0c10446a270d6017c83c65f5687ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d760d1817981842ba5ff7cb6e99e17a2bb3c2a706ac403aff1e9df265bc38e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07faee8c5639e04f711dedbcfa4188da5234cf6ccb4c1800d23311dc29e41f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99112c9996d81cabf24593764be3695882a1acbdb45233464eb28fbdaa0869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceedf973a7cd5ad682ffbb7ac21dbfff2f467cbebab605bc17fcb5eb53e05f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-27T10:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.982172 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:28 crc kubenswrapper[4998]: I0227 10:20:28.992974 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:28Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:29 crc kubenswrapper[4998]: I0227 10:20:29.005297 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:29 crc kubenswrapper[4998]: I0227 10:20:29.017428 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:29 crc kubenswrapper[4998]: I0227 
10:20:29.030966 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:29 crc kubenswrapper[4998]: I0227 10:20:29.046508 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:29 crc kubenswrapper[4998]: I0227 10:20:29.059131 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:29 crc kubenswrapper[4998]: I0227 10:20:29.075318 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:23Z\\\",\\\"message\\\":\\\"2026-02-27T10:19:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc\\\\n2026-02-27T10:19:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc to /host/opt/cni/bin/\\\\n2026-02-27T10:19:38Z [verbose] multus-daemon started\\\\n2026-02-27T10:19:38Z [verbose] Readiness Indicator file check\\\\n2026-02-27T10:20:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:29Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:29 crc kubenswrapper[4998]: I0227 10:20:29.765154 4998 scope.go:117] "RemoveContainer" containerID="8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.335924 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/2.log" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.338913 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.339347 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.353557 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.367266 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.387704 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:02Z\\\",\\\"message\\\":\\\"al for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 10:20:02.609935 7320 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0227 10:20:02.609942 7320 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-m6kr5\\\\nI0227 10:20:02.609946 7320 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0227 10:20:02.609952 7320 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 
2025-08-24T17:21:41Z]\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:20:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkub
e-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
cfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.400105 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.412963 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc 
kubenswrapper[4998]: I0227 10:20:30.427760 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.443405 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e
6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":
\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.455639 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.470267 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80f17ef8-6ca9-4916-b46a-449ae073f38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c1076d7b06c386da934e0ef2a7ae42071884aaa9781bf133c657585aadddbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 10:18:02.681302 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 10:18:02.682345 1 observer_polling.go:159] Starting file observer\\\\nI0227 10:18:02.683346 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 10:18:02.684103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 10:18:25.996529 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0227 10:18:32.988177 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:18:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3aa3432d6c4ec6208a82daff53e88297a9a36dcba4740c03f7982e806c8278e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e09abfc39e6ff32d77e2feeb903f10bb1d82033d56c9b6b7cc45bf52c6a5d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.479788 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.491321 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.502042 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc 
kubenswrapper[4998]: I0227 10:20:30.518113 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89c9d58-a300-46ee-9598-f461887c8f9c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b0013632790bf9b7d865e7dd2b831daeac0c10446a270d6017c83c65f5687ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://4d760d1817981842ba5ff7cb6e99e17a2bb3c2a706ac403aff1e9df265bc38e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07faee8c5639e04f711dedbcfa4188da5234cf6ccb4c1800d23311dc29e41f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99112c9996d81cabf24593764be3695882a1acbdb45233464eb28fbdaa0869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceedf973a7cd5ad682ffbb7ac21dbfff2f467cbebab605bc17fcb5eb53e05f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.528872 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.537427 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.547868 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:23Z\\\",\\\"message\\\":\\\"2026-02-27T10:19:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc\\\\n2026-02-27T10:19:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc to /host/opt/cni/bin/\\\\n2026-02-27T10:19:38Z [verbose] multus-daemon started\\\\n2026-02-27T10:19:38Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T10:20:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.556410 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2
cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.565458 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.576548 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.763991 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:30 crc kubenswrapper[4998]: E0227 10:20:30.764131 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.764316 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.764385 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:30 crc kubenswrapper[4998]: E0227 10:20:30.764468 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:30 crc kubenswrapper[4998]: E0227 10:20:30.764616 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:30 crc kubenswrapper[4998]: I0227 10:20:30.764797 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:30 crc kubenswrapper[4998]: E0227 10:20:30.764877 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.343421 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/3.log" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.344086 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/2.log" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.346667 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" exitCode=1 Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.346707 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.346754 4998 scope.go:117] "RemoveContainer" containerID="8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.347320 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" Feb 27 10:20:31 crc kubenswrapper[4998]: E0227 10:20:31.347474 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 
10:20:31.367643 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89c9d58-a300-46ee-9598-f461887c8f9c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b0013632790bf9b7d865e7dd2b831daeac0c10446a270d6017c83c65f5687ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://4d760d1817981842ba5ff7cb6e99e17a2bb3c2a706ac403aff1e9df265bc38e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07faee8c5639e04f711dedbcfa4188da5234cf6ccb4c1800d23311dc29e41f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99112c9996d81cabf24593764be3695882a1acbdb45233464eb28fbdaa0869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceedf973a7cd5ad682ffbb7ac21dbfff2f467cbebab605bc17fcb5eb53e05f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.381911 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 
10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.393621 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-2
7T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.404813 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc 
kubenswrapper[4998]: I0227 10:20:31.416528 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.429379 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.440343 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.453102 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:23Z\\\",\\\"message\\\":\\\"2026-02-27T10:19:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc\\\\n2026-02-27T10:19:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc to /host/opt/cni/bin/\\\\n2026-02-27T10:19:38Z [verbose] multus-daemon started\\\\n2026-02-27T10:19:38Z [verbose] Readiness Indicator file check\\\\n2026-02-27T10:20:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.465646 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2
cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.475571 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.484441 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc 
kubenswrapper[4998]: I0227 10:20:31.496731 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.511940 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.525736 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.549143 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8132f979bac54c03a1c63ec4f187152f6ce768d0f42623a73d71788142b919df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:02Z\\\",\\\"message\\\":\\\"al for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 10:20:02.609935 7320 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0227 10:20:02.609942 7320 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-m6kr5\\\\nI0227 10:20:02.609946 7320 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0227 10:20:02.609952 7320 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:02Z is after 2025-08-24T17:21:41Z]\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:30Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 2025-08-24T17:21:41Z]\\\\nI0227 10:20:30.675869 7657 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd0
06f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.566151 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80f17ef8-6ca9-4916-b46a-449ae073f38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c1076d7b06c386da934e0ef2a7ae42071884aaa9781bf133c657585aadddbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 10:18:02.681302 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 10:18:02.682345 1 observer_polling.go:159] Starting file observer\\\\nI0227 10:18:02.683346 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 10:18:02.684103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 10:18:25.996529 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0227 10:18:32.988177 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:18:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3aa3432d6c4ec6208a82daff53e88297a9a36dcba4740c03f7982e806c8278e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e09abfc39e6ff32d77e2feeb903f10bb1d82033d56c9b6b7cc45bf52c6a5d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.582075 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.596569 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c69401092
06ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:31 crc kubenswrapper[4998]: I0227 10:20:31.608844 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e364
82554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:31Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.352399 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/3.log" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.355878 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" Feb 27 10:20:32 crc kubenswrapper[4998]: 
E0227 10:20:32.356074 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.370382 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcfqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9652967a-d4bf-4304-bd25-4fed87e89b10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a4ef241b85a5c9e0a28bc1b9d7c65ed60b9ca3472e0cbc0ef4aade02e6dff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-
resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jctzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcfqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.388774 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55e9768-52ee-4fcf-a279-1b55e6d6c6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://403bf97889905e9e76dc8ba71be2b5382ee401ce7939314113ff6019cff589db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ac2d83408a7c3cccea1e095f4712f9f524751eec680b2df6373e44643438bdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3d5eb208ecd8e1f8ab650a14b8bc638732ebe1d85e05caa726a967bd010334d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://736262e0b6f343139b2e1a42c6d3c90fc1ff6df0c02511cb66074a2143476a48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b20c
69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b20c69cb476fe3c5e90cc0086cccef9148e6f31a8a99340967dfee5ad04f8ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16873e6b57f8d49e2acc350c6940109206ef970ae711b7cd62aa87fc90df15eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e03d30a9a2dc44255a2b9c9dd2346f2081d87c91020679615c71a30842c6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnkjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9x2p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.402723 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68afe6cb-a559-4162-a25f-a22003feeca4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e28ec5783cd6f24d22a1412ced337b92b22dc2c17624208871489dca19dcb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a636fc90f5de5e56e91678566c0fe6812e36482554b4454222df8205a44e8d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6mqxn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g7c4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.418389 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80f17ef8-6ca9-4916-b46a-449ae073f38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c1076d7b06c386da934e0ef2a7ae42071884aaa9781bf133c657585aadddbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a48766e32131d5ab9abc23e7d20a4d07291cb6a8fbfd6cc71727209ac18d01a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec 
cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 10:18:02.681302 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 10:18:02.682345 1 observer_polling.go:159] Starting file observer\\\\nI0227 10:18:02.683346 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 10:18:02.684103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 10:18:25.996529 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0227 10:18:32.988177 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:18:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3aa3432d6c4ec6208a82daff53e88297a9a36dcba4740c03f7982e806c8278e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e09abfc39e6ff32d77e2feeb903f10bb1d82033d56c9b6b7cc45bf52c6a5d2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.434919 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fc05123-f698-45ff-a3c3-13c18e466cdb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 10:18:37.748369 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:18:37.748616 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:18:37.749728 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2309284339/tls.crt::/tmp/serving-cert-2309284339/tls.key\\\\\\\"\\\\nI0227 10:18:38.164511 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:18:38.177544 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:18:38.177576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:18:38.177600 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:18:38.177606 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 10:18:38.196196 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 10:18:38.196250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196256 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 10:18:38.196263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 10:18:38.196267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 10:18:38.196282 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 10:18:38.196286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 10:18:38.201111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 10:18:38.201327 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:18:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b42644d7b4769348dda3a3ed5f82a860
712159a6e62df2ee6a04b6e890851fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.453672 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da216979-4070-4044-885e-64db11be9b28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:18:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1a3d0cfbfb3c94ceb7fa8965506defc8d50fbd2c3c977ebea917d9b47f29c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f77430694123a27f4e42870837e5ef1405bc4f157bc15efe4cc2daafd9456\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8df596ab8f658fa7380300b4af87a511447baba28c0a4878829e2217d7d600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dcd8311d3319cee9ae6667193835b5bd30c08b0f52b6278fead9c76ffa5949ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.469755 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-86xkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40178d6d-6068-4937-b7d5-883538892cc5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqwdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-86xkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.506135 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89c9d58-a300-46ee-9598-f461887c8f9c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b0013632790bf9b7d865e7dd2b831daeac0c10446a270d6017c83c65f5687ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d760d1817981842ba5ff7cb6e99e17a2bb3c2a706ac403aff1e9df265bc38e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07faee8c5639e04f711dedbcfa4188da5234cf6ccb4c1800d23311dc29e41f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99112c9996d81cabf24593764be3695
882a1acbdb45233464eb28fbdaa0869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceedf973a7cd5ad682ffbb7ac21dbfff2f467cbebab605bc17fcb5eb53e05f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28ebb836f0d7af6bdb1e5b7eaa78692585c79e9e2f0f05e806cf253adc7070f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f5b604c3667bd9b9ff07f7ee8dd1fd046911e6fbc216f8a850621ac53d010a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://e799d0c5d0a29fa4a20c358b5b14eb2910623970ec18dbc0511b9888bddfbc87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.523051 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c7c05c29eea9415919844353f5f82a0543e2592b282913feca3776ef5a711d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65614a1a13ca1ee58cd9ee838bbb0ba2ada05d130adb26b34ae208292a994bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.537644 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea5f61679603b32df823b8149d832f4d6e98c0cb4c7d1124b3ff587bb729ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.553754 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46lvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a046a5ca-7081-4920-98af-1027a5bc29d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:20:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:23Z\\\",\\\"message\\\":\\\"2026-02-27T10:19:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc\\\\n2026-02-27T10:19:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_633bc58b-0bda-4a44-899f-9a59253933fc to /host/opt/cni/bin/\\\\n2026-02-27T10:19:38Z [verbose] multus-daemon started\\\\n2026-02-27T10:19:38Z [verbose] Readiness Indicator file check\\\\n2026-02-27T10:20:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s287\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46lvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.569091 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"400c5e2f-5448-49c6-bf8e-04b21e552bb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3ce74dab57b3de4f390244a0a95e5ee5c83a47521248808b783a1e94e25c00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2
cc4ac6a72c62695d2fe4152a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l6t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m6kr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.581267 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.594010 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de019f950487216eb5629eeb90db5b4dac503bb5bda901cb9dcb66a9019aaecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.608725 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.624490 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.645112 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T10:20:30Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:30Z is after 
2025-08-24T17:21:41Z]\\\\nI0227 10:20:30.675869 7657 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:20:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a581b9e4264b5768a3
af01485da05fd006f3cd0d379ef13f228187faa74f297c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:19:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xxgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wh9xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.656720 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jl2nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c459deb8-e1ea-43de-a1b0-1b463eee4bdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7df06ac726fa496bf82c45d7a1ec7f2defaf93f1d406f0c55cbe4c2b46c9b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:19:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jl2nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.669372 4998 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1182527-22cd-46f5-ac86-450cc2fbc851\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:17:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f69cef55dc6c16d2f015053573c088caee5fcca24208ad8c7affb58bc981c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a72f2f016ab471dc647a29e72e4e49dd5008018682ff255ca5b8496cfc4a2a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:17:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:17:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:17:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:20:32Z is after 2025-08-24T17:21:41Z" Feb 27 10:20:32 crc 
kubenswrapper[4998]: I0227 10:20:32.763873 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.763890 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:32 crc kubenswrapper[4998]: E0227 10:20:32.764010 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.763935 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:32 crc kubenswrapper[4998]: I0227 10:20:32.763950 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:32 crc kubenswrapper[4998]: E0227 10:20:32.764101 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:32 crc kubenswrapper[4998]: E0227 10:20:32.764167 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:32 crc kubenswrapper[4998]: E0227 10:20:32.764300 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:33 crc kubenswrapper[4998]: E0227 10:20:33.876805 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:20:34 crc kubenswrapper[4998]: I0227 10:20:34.764673 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:34 crc kubenswrapper[4998]: I0227 10:20:34.764716 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:34 crc kubenswrapper[4998]: I0227 10:20:34.764732 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:34 crc kubenswrapper[4998]: E0227 10:20:34.764827 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:34 crc kubenswrapper[4998]: I0227 10:20:34.764916 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:34 crc kubenswrapper[4998]: E0227 10:20:34.765100 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:34 crc kubenswrapper[4998]: E0227 10:20:34.765610 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:34 crc kubenswrapper[4998]: E0227 10:20:34.765683 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:36 crc kubenswrapper[4998]: I0227 10:20:36.764384 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:36 crc kubenswrapper[4998]: I0227 10:20:36.764419 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:36 crc kubenswrapper[4998]: E0227 10:20:36.764589 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:36 crc kubenswrapper[4998]: I0227 10:20:36.764668 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:36 crc kubenswrapper[4998]: I0227 10:20:36.764734 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:36 crc kubenswrapper[4998]: E0227 10:20:36.764880 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:36 crc kubenswrapper[4998]: E0227 10:20:36.764986 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:36 crc kubenswrapper[4998]: E0227 10:20:36.765113 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.764738 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.764811 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.764944 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:20:38 crc kubenswrapper[4998]: E0227 10:20:38.764947 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:20:38 crc kubenswrapper[4998]: E0227 10:20:38.765075 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.765125 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:20:38 crc kubenswrapper[4998]: E0227 10:20:38.765176 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:20:38 crc kubenswrapper[4998]: E0227 10:20:38.765249 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.825620 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qcfqc" podStartSLOduration=121.825586444 podStartE2EDuration="2m1.825586444s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:38.79831961 +0000 UTC m=+190.796590618" watchObservedRunningTime="2026-02-27 10:20:38.825586444 +0000 UTC m=+190.823857432" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.842406 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-l9x2p" podStartSLOduration=121.842373318 podStartE2EDuration="2m1.842373318s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:38.824813253 +0000 UTC m=+190.823084221" watchObservedRunningTime="2026-02-27 10:20:38.842373318 +0000 UTC m=+190.840644286" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.842647 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g7c4b" podStartSLOduration=121.842642245 
podStartE2EDuration="2m1.842642245s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:38.842348787 +0000 UTC m=+190.840619755" watchObservedRunningTime="2026-02-27 10:20:38.842642245 +0000 UTC m=+190.840913213" Feb 27 10:20:38 crc kubenswrapper[4998]: E0227 10:20:38.882193 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.922553 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=31.922532623 podStartE2EDuration="31.922532623s" podCreationTimestamp="2026-02-27 10:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:38.896594905 +0000 UTC m=+190.894865893" watchObservedRunningTime="2026-02-27 10:20:38.922532623 +0000 UTC m=+190.920803591" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.942688 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=68.942666949 podStartE2EDuration="1m8.942666949s" podCreationTimestamp="2026-02-27 10:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:38.942529015 +0000 UTC m=+190.940799993" watchObservedRunningTime="2026-02-27 10:20:38.942666949 +0000 UTC m=+190.940937917" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.943055 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.943048749 podStartE2EDuration="1m29.943048749s" podCreationTimestamp="2026-02-27 10:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:38.922673807 +0000 UTC m=+190.920944775" watchObservedRunningTime="2026-02-27 10:20:38.943048749 +0000 UTC m=+190.941319717" Feb 27 10:20:38 crc kubenswrapper[4998]: I0227 10:20:38.984421 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=21.984392582 podStartE2EDuration="21.984392582s" podCreationTimestamp="2026-02-27 10:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:38.983946069 +0000 UTC m=+190.982217037" watchObservedRunningTime="2026-02-27 10:20:38.984392582 +0000 UTC m=+190.982663550" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.028173 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-46lvx" podStartSLOduration=122.02815293 podStartE2EDuration="2m2.02815293s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:39.027660377 +0000 UTC m=+191.025931355" watchObservedRunningTime="2026-02-27 10:20:39.02815293 +0000 UTC m=+191.026423898" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.042245 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podStartSLOduration=122.042205078 podStartE2EDuration="2m2.042205078s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 10:20:39.041209151 +0000 UTC m=+191.039480119" watchObservedRunningTime="2026-02-27 10:20:39.042205078 +0000 UTC m=+191.040476046" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.084431 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.084464 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.084473 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.084486 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.084495 4998 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:20:39Z","lastTransitionTime":"2026-02-27T10:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.131423 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"] Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.131857 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.133928 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.134619 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.135189 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.135870 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.143020 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jl2nx" podStartSLOduration=122.143002124 podStartE2EDuration="2m2.143002124s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:39.142959842 +0000 UTC m=+191.141230810" watchObservedRunningTime="2026-02-27 10:20:39.143002124 +0000 UTC m=+191.141273092" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.149508 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.149564 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.149581 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.149598 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.149710 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg" Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.153941 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=60.153929966 podStartE2EDuration="1m0.153929966s" 
podCreationTimestamp="2026-02-27 10:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:39.152566288 +0000 UTC m=+191.150837276" watchObservedRunningTime="2026-02-27 10:20:39.153929966 +0000 UTC m=+191.152200934"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.250785 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.250840 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.250859 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.250879 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.250923 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.250946 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.251081 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.251968 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.259778 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.273099 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73e79d3d-ed8f-447e-9698-ef4e8bc6fccd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4rcxg\" (UID: \"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.441757 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg"
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.803149 4998 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 27 10:20:39 crc kubenswrapper[4998]: I0227 10:20:39.812462 4998 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 27 10:20:40 crc kubenswrapper[4998]: I0227 10:20:40.383574 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg" event={"ID":"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd","Type":"ContainerStarted","Data":"5571752c85731eaa1150b398466420503e894ab1baf324f7cc31f45d60c001ae"}
Feb 27 10:20:40 crc kubenswrapper[4998]: I0227 10:20:40.383663 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg" event={"ID":"73e79d3d-ed8f-447e-9698-ef4e8bc6fccd","Type":"ContainerStarted","Data":"f3b450d5c39ed8a0efc4d736845812325dd1ff3da9e6a91a0c265f53cdfb6091"}
Feb 27 10:20:40 crc kubenswrapper[4998]: I0227 10:20:40.399089 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4rcxg" podStartSLOduration=123.399066589 podStartE2EDuration="2m3.399066589s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:20:40.398155043 +0000 UTC m=+192.396426031" watchObservedRunningTime="2026-02-27 10:20:40.399066589 +0000 UTC m=+192.397337577"
Feb 27 10:20:40 crc kubenswrapper[4998]: I0227 10:20:40.764898 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:40 crc kubenswrapper[4998]: I0227 10:20:40.764960 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:40 crc kubenswrapper[4998]: I0227 10:20:40.765049 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:40 crc kubenswrapper[4998]: E0227 10:20:40.765265 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:40 crc kubenswrapper[4998]: I0227 10:20:40.765348 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:40 crc kubenswrapper[4998]: E0227 10:20:40.765499 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:40 crc kubenswrapper[4998]: E0227 10:20:40.765682 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:40 crc kubenswrapper[4998]: E0227 10:20:40.765821 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:42 crc kubenswrapper[4998]: I0227 10:20:42.764293 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:42 crc kubenswrapper[4998]: E0227 10:20:42.764434 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:42 crc kubenswrapper[4998]: I0227 10:20:42.764545 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:42 crc kubenswrapper[4998]: I0227 10:20:42.764714 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:42 crc kubenswrapper[4998]: I0227 10:20:42.764755 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:42 crc kubenswrapper[4998]: E0227 10:20:42.764868 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:42 crc kubenswrapper[4998]: E0227 10:20:42.764942 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:42 crc kubenswrapper[4998]: E0227 10:20:42.765077 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:43 crc kubenswrapper[4998]: I0227 10:20:43.764811 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"
Feb 27 10:20:43 crc kubenswrapper[4998]: E0227 10:20:43.765068 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b"
Feb 27 10:20:43 crc kubenswrapper[4998]: E0227 10:20:43.883889 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 10:20:44 crc kubenswrapper[4998]: I0227 10:20:44.764029 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:44 crc kubenswrapper[4998]: E0227 10:20:44.764157 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:44 crc kubenswrapper[4998]: I0227 10:20:44.764369 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:44 crc kubenswrapper[4998]: E0227 10:20:44.764414 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:44 crc kubenswrapper[4998]: I0227 10:20:44.764534 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:44 crc kubenswrapper[4998]: E0227 10:20:44.764621 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:44 crc kubenswrapper[4998]: I0227 10:20:44.764571 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:44 crc kubenswrapper[4998]: E0227 10:20:44.764735 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:46 crc kubenswrapper[4998]: I0227 10:20:46.764619 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:46 crc kubenswrapper[4998]: I0227 10:20:46.764675 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:46 crc kubenswrapper[4998]: E0227 10:20:46.764850 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:46 crc kubenswrapper[4998]: I0227 10:20:46.764959 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:46 crc kubenswrapper[4998]: I0227 10:20:46.765049 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:46 crc kubenswrapper[4998]: E0227 10:20:46.765134 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:46 crc kubenswrapper[4998]: E0227 10:20:46.765324 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:46 crc kubenswrapper[4998]: E0227 10:20:46.765425 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:48 crc kubenswrapper[4998]: I0227 10:20:48.764443 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:48 crc kubenswrapper[4998]: E0227 10:20:48.766810 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:48 crc kubenswrapper[4998]: I0227 10:20:48.767005 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:48 crc kubenswrapper[4998]: I0227 10:20:48.767061 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:48 crc kubenswrapper[4998]: I0227 10:20:48.767044 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:48 crc kubenswrapper[4998]: E0227 10:20:48.767278 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:48 crc kubenswrapper[4998]: E0227 10:20:48.767342 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:48 crc kubenswrapper[4998]: E0227 10:20:48.767513 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:48 crc kubenswrapper[4998]: E0227 10:20:48.884489 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 10:20:50 crc kubenswrapper[4998]: I0227 10:20:50.764922 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:50 crc kubenswrapper[4998]: E0227 10:20:50.765136 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:50 crc kubenswrapper[4998]: I0227 10:20:50.765496 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:50 crc kubenswrapper[4998]: I0227 10:20:50.765512 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:50 crc kubenswrapper[4998]: I0227 10:20:50.765613 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:50 crc kubenswrapper[4998]: E0227 10:20:50.765607 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:50 crc kubenswrapper[4998]: E0227 10:20:50.765841 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:50 crc kubenswrapper[4998]: E0227 10:20:50.766051 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:52 crc kubenswrapper[4998]: I0227 10:20:52.764396 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:52 crc kubenswrapper[4998]: I0227 10:20:52.764462 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:52 crc kubenswrapper[4998]: I0227 10:20:52.764494 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:52 crc kubenswrapper[4998]: E0227 10:20:52.764586 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:52 crc kubenswrapper[4998]: I0227 10:20:52.764621 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:52 crc kubenswrapper[4998]: E0227 10:20:52.764781 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:52 crc kubenswrapper[4998]: E0227 10:20:52.764815 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:52 crc kubenswrapper[4998]: E0227 10:20:52.764963 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:53 crc kubenswrapper[4998]: E0227 10:20:53.885899 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 10:20:54 crc kubenswrapper[4998]: I0227 10:20:54.764480 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:54 crc kubenswrapper[4998]: I0227 10:20:54.764480 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:54 crc kubenswrapper[4998]: I0227 10:20:54.764545 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:54 crc kubenswrapper[4998]: I0227 10:20:54.764645 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:54 crc kubenswrapper[4998]: E0227 10:20:54.765381 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:54 crc kubenswrapper[4998]: E0227 10:20:54.765651 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:54 crc kubenswrapper[4998]: E0227 10:20:54.765774 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:54 crc kubenswrapper[4998]: E0227 10:20:54.765903 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:56 crc kubenswrapper[4998]: I0227 10:20:56.764210 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:56 crc kubenswrapper[4998]: I0227 10:20:56.764210 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:56 crc kubenswrapper[4998]: I0227 10:20:56.764313 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:56 crc kubenswrapper[4998]: I0227 10:20:56.764485 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:56 crc kubenswrapper[4998]: E0227 10:20:56.764595 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:56 crc kubenswrapper[4998]: E0227 10:20:56.764874 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:56 crc kubenswrapper[4998]: E0227 10:20:56.764927 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:56 crc kubenswrapper[4998]: E0227 10:20:56.765405 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:56 crc kubenswrapper[4998]: I0227 10:20:56.765638 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"
Feb 27 10:20:56 crc kubenswrapper[4998]: E0227 10:20:56.765794 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wh9xl_openshift-ovn-kubernetes(bceef7ff-b99d-432e-b9cb-7c538c82b74b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b"
Feb 27 10:20:58 crc kubenswrapper[4998]: I0227 10:20:58.764901 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:20:58 crc kubenswrapper[4998]: I0227 10:20:58.764985 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:20:58 crc kubenswrapper[4998]: E0227 10:20:58.765938 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:20:58 crc kubenswrapper[4998]: I0227 10:20:58.765953 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:20:58 crc kubenswrapper[4998]: I0227 10:20:58.766005 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:20:58 crc kubenswrapper[4998]: E0227 10:20:58.766120 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:20:58 crc kubenswrapper[4998]: E0227 10:20:58.766174 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:20:58 crc kubenswrapper[4998]: E0227 10:20:58.766264 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:20:58 crc kubenswrapper[4998]: E0227 10:20:58.886456 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 10:21:00 crc kubenswrapper[4998]: I0227 10:21:00.764363 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:21:00 crc kubenswrapper[4998]: I0227 10:21:00.764388 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:21:00 crc kubenswrapper[4998]: I0227 10:21:00.764498 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:21:00 crc kubenswrapper[4998]: E0227 10:21:00.764489 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:21:00 crc kubenswrapper[4998]: I0227 10:21:00.764552 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:21:00 crc kubenswrapper[4998]: E0227 10:21:00.764681 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:21:00 crc kubenswrapper[4998]: E0227 10:21:00.764736 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:21:00 crc kubenswrapper[4998]: E0227 10:21:00.764809 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:21:02 crc kubenswrapper[4998]: I0227 10:21:02.764391 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:21:02 crc kubenswrapper[4998]: I0227 10:21:02.764429 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:21:02 crc kubenswrapper[4998]: I0227 10:21:02.764532 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:21:02 crc kubenswrapper[4998]: E0227 10:21:02.764594 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:21:02 crc kubenswrapper[4998]: E0227 10:21:02.764728 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:21:02 crc kubenswrapper[4998]: E0227 10:21:02.764860 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:02 crc kubenswrapper[4998]: I0227 10:21:02.764918 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:02 crc kubenswrapper[4998]: E0227 10:21:02.765016 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:03 crc kubenswrapper[4998]: E0227 10:21:03.887482 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:21:04 crc kubenswrapper[4998]: I0227 10:21:04.764638 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:04 crc kubenswrapper[4998]: I0227 10:21:04.764727 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:04 crc kubenswrapper[4998]: I0227 10:21:04.764669 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:04 crc kubenswrapper[4998]: I0227 10:21:04.764650 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:04 crc kubenswrapper[4998]: E0227 10:21:04.764836 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:04 crc kubenswrapper[4998]: E0227 10:21:04.764965 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:04 crc kubenswrapper[4998]: E0227 10:21:04.765056 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:04 crc kubenswrapper[4998]: E0227 10:21:04.765130 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:06 crc kubenswrapper[4998]: I0227 10:21:06.764953 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:06 crc kubenswrapper[4998]: I0227 10:21:06.765014 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:06 crc kubenswrapper[4998]: I0227 10:21:06.765068 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:06 crc kubenswrapper[4998]: I0227 10:21:06.764962 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:06 crc kubenswrapper[4998]: E0227 10:21:06.765177 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:06 crc kubenswrapper[4998]: E0227 10:21:06.765245 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:06 crc kubenswrapper[4998]: E0227 10:21:06.765352 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:06 crc kubenswrapper[4998]: E0227 10:21:06.765682 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:08 crc kubenswrapper[4998]: I0227 10:21:08.763945 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:08 crc kubenswrapper[4998]: I0227 10:21:08.764017 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:08 crc kubenswrapper[4998]: I0227 10:21:08.764054 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:08 crc kubenswrapper[4998]: I0227 10:21:08.764070 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:08 crc kubenswrapper[4998]: E0227 10:21:08.765275 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:08 crc kubenswrapper[4998]: E0227 10:21:08.765426 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:08 crc kubenswrapper[4998]: E0227 10:21:08.765587 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:08 crc kubenswrapper[4998]: E0227 10:21:08.765693 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:08 crc kubenswrapper[4998]: E0227 10:21:08.887979 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.505812 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/1.log" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.506269 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/0.log" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.506311 4998 generic.go:334] "Generic (PLEG): container finished" podID="a046a5ca-7081-4920-98af-1027a5bc29d0" containerID="cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93" exitCode=1 Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.506341 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46lvx" event={"ID":"a046a5ca-7081-4920-98af-1027a5bc29d0","Type":"ContainerDied","Data":"cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93"} Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.506380 4998 scope.go:117] "RemoveContainer" containerID="70d74d26b8ecce796cf2e1d35f24a62a2d3005aac8c11e2a148aa9b9a4e670f4" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.506831 4998 scope.go:117] "RemoveContainer" containerID="cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93" Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.508332 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=kube-multus pod=multus-46lvx_openshift-multus(a046a5ca-7081-4920-98af-1027a5bc29d0)\"" pod="openshift-multus/multus-46lvx" podUID="a046a5ca-7081-4920-98af-1027a5bc29d0" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.702926 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.703152 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:23:12.703121809 +0000 UTC m=+344.701392787 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.764256 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.764447 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.764653 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.764645 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.764715 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.764894 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.764945 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.765308 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.765665 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.803873 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.804201 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.804270 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.804084 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.804340 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.804357 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.804365 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.804405 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:23:12.80438891 +0000 UTC m=+344.802659878 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.804422 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:23:12.804415031 +0000 UTC m=+344.802685999 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.804560 4998 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.804688 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:23:12.804661077 +0000 UTC m=+344.802932075 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:21:10 crc kubenswrapper[4998]: I0227 10:21:10.905794 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.906312 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.906386 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.906406 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:21:10 crc kubenswrapper[4998]: E0227 10:21:10.906504 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-27 10:23:12.906468864 +0000 UTC m=+344.904740022 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:21:11 crc kubenswrapper[4998]: I0227 10:21:11.517045 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/1.log" Feb 27 10:21:11 crc kubenswrapper[4998]: I0227 10:21:11.518908 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/3.log" Feb 27 10:21:11 crc kubenswrapper[4998]: I0227 10:21:11.521069 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerStarted","Data":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} Feb 27 10:21:11 crc kubenswrapper[4998]: I0227 10:21:11.521511 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:21:11 crc kubenswrapper[4998]: I0227 10:21:11.549144 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podStartSLOduration=154.549121131 podStartE2EDuration="2m34.549121131s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:11.546761036 +0000 UTC m=+223.545032014" 
watchObservedRunningTime="2026-02-27 10:21:11.549121131 +0000 UTC m=+223.547392089" Feb 27 10:21:11 crc kubenswrapper[4998]: I0227 10:21:11.553716 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-86xkz"] Feb 27 10:21:11 crc kubenswrapper[4998]: I0227 10:21:11.553827 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:11 crc kubenswrapper[4998]: E0227 10:21:11.553930 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:12 crc kubenswrapper[4998]: I0227 10:21:12.764412 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:12 crc kubenswrapper[4998]: I0227 10:21:12.764491 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:12 crc kubenswrapper[4998]: E0227 10:21:12.764664 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:12 crc kubenswrapper[4998]: I0227 10:21:12.764722 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:12 crc kubenswrapper[4998]: E0227 10:21:12.764795 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:12 crc kubenswrapper[4998]: E0227 10:21:12.764518 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:12 crc kubenswrapper[4998]: I0227 10:21:12.764905 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:12 crc kubenswrapper[4998]: E0227 10:21:12.764978 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:13 crc kubenswrapper[4998]: E0227 10:21:13.889480 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 27 10:21:14 crc kubenswrapper[4998]: I0227 10:21:14.764641 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:14 crc kubenswrapper[4998]: I0227 10:21:14.764671 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:14 crc kubenswrapper[4998]: I0227 10:21:14.764688 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:14 crc kubenswrapper[4998]: E0227 10:21:14.765480 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:14 crc kubenswrapper[4998]: I0227 10:21:14.765559 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:14 crc kubenswrapper[4998]: E0227 10:21:14.765804 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:14 crc kubenswrapper[4998]: E0227 10:21:14.770442 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:14 crc kubenswrapper[4998]: E0227 10:21:14.770680 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:16 crc kubenswrapper[4998]: I0227 10:21:16.764079 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:16 crc kubenswrapper[4998]: I0227 10:21:16.764132 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:16 crc kubenswrapper[4998]: I0227 10:21:16.764149 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:16 crc kubenswrapper[4998]: I0227 10:21:16.764041 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:16 crc kubenswrapper[4998]: E0227 10:21:16.764729 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:16 crc kubenswrapper[4998]: E0227 10:21:16.765047 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:16 crc kubenswrapper[4998]: E0227 10:21:16.765271 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:16 crc kubenswrapper[4998]: E0227 10:21:16.765314 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:18 crc kubenswrapper[4998]: I0227 10:21:18.764161 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:18 crc kubenswrapper[4998]: I0227 10:21:18.764240 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:18 crc kubenswrapper[4998]: E0227 10:21:18.765458 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:18 crc kubenswrapper[4998]: I0227 10:21:18.765474 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:18 crc kubenswrapper[4998]: E0227 10:21:18.765528 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:18 crc kubenswrapper[4998]: E0227 10:21:18.765571 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:18 crc kubenswrapper[4998]: I0227 10:21:18.765482 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:18 crc kubenswrapper[4998]: E0227 10:21:18.765646 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:18 crc kubenswrapper[4998]: E0227 10:21:18.890375 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:21:20 crc kubenswrapper[4998]: I0227 10:21:20.764185 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:20 crc kubenswrapper[4998]: E0227 10:21:20.764345 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:20 crc kubenswrapper[4998]: I0227 10:21:20.764202 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:20 crc kubenswrapper[4998]: I0227 10:21:20.764399 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:20 crc kubenswrapper[4998]: E0227 10:21:20.764435 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:20 crc kubenswrapper[4998]: I0227 10:21:20.764185 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:20 crc kubenswrapper[4998]: E0227 10:21:20.764543 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:20 crc kubenswrapper[4998]: E0227 10:21:20.764632 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:22 crc kubenswrapper[4998]: I0227 10:21:22.765059 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:22 crc kubenswrapper[4998]: I0227 10:21:22.765121 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:22 crc kubenswrapper[4998]: E0227 10:21:22.765364 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:22 crc kubenswrapper[4998]: I0227 10:21:22.765433 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:22 crc kubenswrapper[4998]: E0227 10:21:22.765673 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:22 crc kubenswrapper[4998]: I0227 10:21:22.765779 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:22 crc kubenswrapper[4998]: E0227 10:21:22.765778 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:22 crc kubenswrapper[4998]: E0227 10:21:22.765844 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:22 crc kubenswrapper[4998]: I0227 10:21:22.766102 4998 scope.go:117] "RemoveContainer" containerID="cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93" Feb 27 10:21:23 crc kubenswrapper[4998]: I0227 10:21:23.562930 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/1.log" Feb 27 10:21:23 crc kubenswrapper[4998]: I0227 10:21:23.563320 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46lvx" event={"ID":"a046a5ca-7081-4920-98af-1027a5bc29d0","Type":"ContainerStarted","Data":"d3e2963b299c9c91d93abf85f31c8d17e14dd7e330e911092cdfcb10879314ea"} Feb 27 10:21:23 crc kubenswrapper[4998]: E0227 10:21:23.892175 4998 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:21:24 crc kubenswrapper[4998]: I0227 10:21:24.763967 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:24 crc kubenswrapper[4998]: E0227 10:21:24.764132 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:24 crc kubenswrapper[4998]: I0227 10:21:24.764189 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:24 crc kubenswrapper[4998]: E0227 10:21:24.764296 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:24 crc kubenswrapper[4998]: I0227 10:21:24.764292 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:24 crc kubenswrapper[4998]: I0227 10:21:24.764326 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:24 crc kubenswrapper[4998]: E0227 10:21:24.765073 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:24 crc kubenswrapper[4998]: E0227 10:21:24.764362 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:26 crc kubenswrapper[4998]: I0227 10:21:26.764937 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:26 crc kubenswrapper[4998]: I0227 10:21:26.764957 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:26 crc kubenswrapper[4998]: E0227 10:21:26.765071 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:26 crc kubenswrapper[4998]: I0227 10:21:26.765122 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:26 crc kubenswrapper[4998]: I0227 10:21:26.765318 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:26 crc kubenswrapper[4998]: E0227 10:21:26.765308 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:26 crc kubenswrapper[4998]: E0227 10:21:26.765417 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:26 crc kubenswrapper[4998]: E0227 10:21:26.765465 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:28 crc kubenswrapper[4998]: I0227 10:21:28.764134 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:28 crc kubenswrapper[4998]: I0227 10:21:28.764266 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:28 crc kubenswrapper[4998]: E0227 10:21:28.765312 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:21:28 crc kubenswrapper[4998]: I0227 10:21:28.765343 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:28 crc kubenswrapper[4998]: I0227 10:21:28.765409 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:28 crc kubenswrapper[4998]: E0227 10:21:28.765689 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5" Feb 27 10:21:28 crc kubenswrapper[4998]: E0227 10:21:28.766158 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:21:28 crc kubenswrapper[4998]: E0227 10:21:28.766099 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.109318 4998 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.157997 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rqwm5"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.158914 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.162308 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hpdf9"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.163020 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dvl4h"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.163166 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.164135 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.164212 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.164897 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2ksp8"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.165051 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.165342 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.165672 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.166785 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.167509 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.167663 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.168244 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.168303 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-vn72h"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.168875 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.169573 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.170629 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.171388 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-vn72h" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.175793 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.189536 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.189970 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.192762 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.192875 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.193309 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.194196 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 27 
10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.194307 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.194634 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.195321 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.195424 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.195597 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.198349 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.199896 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-serving-cert\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.199994 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84d79f7d-71fe-4982-8995-729a288d93fa-service-ca-bundle\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.200070 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-encryption-config\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.200168 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16ddfc0b-99f7-4c57-a804-665d86d0411b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.200269 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-client-ca\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.200343 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ae341042-4a8d-4a41-bb0e-931abecc819a-images\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.200807 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5hv5\" (UniqueName: 
\"kubernetes.io/projected/16ddfc0b-99f7-4c57-a804-665d86d0411b-kube-api-access-d5hv5\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.200886 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-audit\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.200952 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ddfc0b-99f7-4c57-a804-665d86d0411b-serving-cert\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.201020 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnrkk\" (UniqueName: \"kubernetes.io/projected/daeaab34-be3d-4a1e-964f-17e3661682bc-kube-api-access-dnrkk\") pod \"downloads-7954f5f757-vn72h\" (UID: \"daeaab34-be3d-4a1e-964f-17e3661682bc\") " pod="openshift-console/downloads-7954f5f757-vn72h" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.201134 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84d79f7d-71fe-4982-8995-729a288d93fa-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.201337 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16ddfc0b-99f7-4c57-a804-665d86d0411b-encryption-config\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.201600 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d79f7d-71fe-4982-8995-729a288d93fa-config\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.201814 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-config\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.201872 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p75f8\" (UniqueName: \"kubernetes.io/projected/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-kube-api-access-p75f8\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.201929 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-client-ca\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.201963 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgst\" (UniqueName: \"kubernetes.io/projected/84d79f7d-71fe-4982-8995-729a288d93fa-kube-api-access-nlgst\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202160 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae341042-4a8d-4a41-bb0e-931abecc819a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202209 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39055adf-7be5-43c0-851f-e0c2c0e631a7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbr44\" (UID: \"39055adf-7be5-43c0-851f-e0c2c0e631a7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202262 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202302 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-node-pullsecrets\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202341 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4812fe61-7540-41eb-8daa-26541710d1fe-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ggw9l\" (UID: \"4812fe61-7540-41eb-8daa-26541710d1fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202370 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qw6r\" (UniqueName: \"kubernetes.io/projected/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-kube-api-access-5qw6r\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202444 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-machine-approver-tls\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202538 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16ddfc0b-99f7-4c57-a804-665d86d0411b-audit-dir\") pod \"apiserver-7bbb656c7d-68mc7\" 
(UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202617 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-serving-cert\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202701 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854e4003-37ab-473a-a282-6e9c453dfd52-serving-cert\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202775 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16ddfc0b-99f7-4c57-a804-665d86d0411b-audit-policies\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.202987 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-config\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.203080 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.203169 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/854e4003-37ab-473a-a282-6e9c453dfd52-kube-api-access-fpb9b\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.203264 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-etcd-serving-ca\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.203344 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-config\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.203423 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rpl6\" (UniqueName: \"kubernetes.io/projected/ae341042-4a8d-4a41-bb0e-931abecc819a-kube-api-access-7rpl6\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.203502 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16ddfc0b-99f7-4c57-a804-665d86d0411b-etcd-client\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.203601 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-config\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.203637 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.204046 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.206796 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.206913 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.207196 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.207457 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 10:21:30 crc kubenswrapper[4998]: 
I0227 10:21:30.208058 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.208536 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.210338 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.210521 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.203681 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39055adf-7be5-43c0-851f-e0c2c0e631a7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbr44\" (UID: \"39055adf-7be5-43c0-851f-e0c2c0e631a7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.210668 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.210703 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-n6r8g"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.210752 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae341042-4a8d-4a41-bb0e-931abecc819a-config\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.210808 
4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.210856 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b82z\" (UniqueName: \"kubernetes.io/projected/39055adf-7be5-43c0-851f-e0c2c0e631a7-kube-api-access-2b82z\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbr44\" (UID: \"39055adf-7be5-43c0-851f-e0c2c0e631a7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.210970 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-image-import-ca\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.211032 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-audit-dir\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.211070 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16ddfc0b-99f7-4c57-a804-665d86d0411b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.211118 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m65zs\" (UniqueName: \"kubernetes.io/projected/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-kube-api-access-m65zs\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.211217 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-etcd-client\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.211344 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-auth-proxy-config\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.211404 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d79f7d-71fe-4982-8995-729a288d93fa-serving-cert\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.211510 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpzn\" (UniqueName: \"kubernetes.io/projected/4812fe61-7540-41eb-8daa-26541710d1fe-kube-api-access-jlpzn\") pod 
\"cluster-samples-operator-665b6dd947-ggw9l\" (UID: \"4812fe61-7540-41eb-8daa-26541710d1fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.211875 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.212192 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.218203 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.218441 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.218649 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.218976 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.219582 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.219983 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.220597 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.221074 4998 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-h2zrl"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.221816 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.222165 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.222652 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.223108 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.223246 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.223895 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.224589 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.224602 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.224863 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.225052 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.225393 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.225421 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.225698 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.225965 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.226109 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.226394 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.226561 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.226935 4998 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.226961 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.228089 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.228312 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.228412 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.228530 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.228706 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.228787 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.228799 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.228853 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.228986 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 10:21:30 crc 
kubenswrapper[4998]: I0227 10:21:30.228719 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.229310 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.229330 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.229390 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.229525 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.241045 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.241298 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7g9hp"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.242171 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.245044 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.245305 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.246927 4998 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.247424 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qx7dp"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.247938 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.250252 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.251689 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.251724 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.251852 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.253194 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.254306 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.257076 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.257733 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 10:21:30 crc 
kubenswrapper[4998]: I0227 10:21:30.257810 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.260697 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.261285 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.261464 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ffk9k"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.261927 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ffk9k"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.269614 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.272745 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.273045 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.275627 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.275840 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.276059 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.276470 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.276658 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.276807 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.276979 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.277800 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.278615 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.278718 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rqwm5"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.278812 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.278826 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.278968 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.278991 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.279069 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.279202 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.279285 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.279245 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.279789 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.279869 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.282007 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.282544 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.284609 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.290746 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dvl4h"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.293898 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.295489 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.296923 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tsvns"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.298845 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.299018 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.311107 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.318772 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.320715 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.323489 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.324102 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.324162 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.327289 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.327996 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.328029 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.328568 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.331843 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332426 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae341042-4a8d-4a41-bb0e-931abecc819a-config\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332465 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b82z\" (UniqueName: \"kubernetes.io/projected/39055adf-7be5-43c0-851f-e0c2c0e631a7-kube-api-access-2b82z\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbr44\" (UID: \"39055adf-7be5-43c0-851f-e0c2c0e631a7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332498 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc8n8\" (UniqueName: \"kubernetes.io/projected/34cb42eb-c801-4afa-9b85-64ea3d8c3ab2-kube-api-access-sc8n8\") pod \"openshift-apiserver-operator-796bbdcf4f-bwhh6\" (UID: \"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332528 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332557 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-image-import-ca\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332579 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-audit-dir\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332598 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-etcd-client\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332617 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16ddfc0b-99f7-4c57-a804-665d86d0411b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332756 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m65zs\" (UniqueName: \"kubernetes.io/projected/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-kube-api-access-m65zs\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332792 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332812 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjgh\" (UniqueName: \"kubernetes.io/projected/bcc07b80-55eb-465c-9528-6bad6e2bcbc1-kube-api-access-xxjgh\") pod \"openshift-config-operator-7777fb866f-vlnjr\" (UID: \"bcc07b80-55eb-465c-9528-6bad6e2bcbc1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332834 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlpzn\" (UniqueName: \"kubernetes.io/projected/4812fe61-7540-41eb-8daa-26541710d1fe-kube-api-access-jlpzn\") pod \"cluster-samples-operator-665b6dd947-ggw9l\" (UID: \"4812fe61-7540-41eb-8daa-26541710d1fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332858 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-auth-proxy-config\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332880 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d79f7d-71fe-4982-8995-729a288d93fa-serving-cert\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332902 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332934 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcc07b80-55eb-465c-9528-6bad6e2bcbc1-serving-cert\") pod \"openshift-config-operator-7777fb866f-vlnjr\" (UID: \"bcc07b80-55eb-465c-9528-6bad6e2bcbc1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332956 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89392c8-76ea-4723-8fc3-04fcd6727a23-config\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332973 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d89392c8-76ea-4723-8fc3-04fcd6727a23-etcd-ca\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.332994 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-serving-cert\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333016 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333040 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333063 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bcc07b80-55eb-465c-9528-6bad6e2bcbc1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vlnjr\" (UID: \"bcc07b80-55eb-465c-9528-6bad6e2bcbc1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333083 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333104 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84d79f7d-71fe-4982-8995-729a288d93fa-service-ca-bundle\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333125 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-encryption-config\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333147 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92wrt\" (UniqueName: \"kubernetes.io/projected/43e2df1f-102d-4440-bc2b-76d89a47be31-kube-api-access-92wrt\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333185 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5bh\" (UniqueName: \"kubernetes.io/projected/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-kube-api-access-nc5bh\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333217 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16ddfc0b-99f7-4c57-a804-665d86d0411b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333272 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d89392c8-76ea-4723-8fc3-04fcd6727a23-serving-cert\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333296 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-client-ca\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333326 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ae341042-4a8d-4a41-bb0e-931abecc819a-images\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333347 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5l6m\" (UniqueName: \"kubernetes.io/projected/d89392c8-76ea-4723-8fc3-04fcd6727a23-kube-api-access-c5l6m\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333369 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5hv5\" (UniqueName: \"kubernetes.io/projected/16ddfc0b-99f7-4c57-a804-665d86d0411b-kube-api-access-d5hv5\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333395 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-audit\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333412 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ddfc0b-99f7-4c57-a804-665d86d0411b-serving-cert\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333437 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnrkk\" (UniqueName: \"kubernetes.io/projected/daeaab34-be3d-4a1e-964f-17e3661682bc-kube-api-access-dnrkk\") pod \"downloads-7954f5f757-vn72h\" (UID: \"daeaab34-be3d-4a1e-964f-17e3661682bc\") " pod="openshift-console/downloads-7954f5f757-vn72h"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333460 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84d79f7d-71fe-4982-8995-729a288d93fa-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333754 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34cb42eb-c801-4afa-9b85-64ea3d8c3ab2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bwhh6\" (UID: \"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333776 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333795 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvdnk\" (UniqueName: \"kubernetes.io/projected/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-kube-api-access-lvdnk\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333816 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-config\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333841 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p75f8\" (UniqueName: \"kubernetes.io/projected/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-kube-api-access-p75f8\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333859 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16ddfc0b-99f7-4c57-a804-665d86d0411b-encryption-config\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333881 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d79f7d-71fe-4982-8995-729a288d93fa-config\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333920 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333942 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-trusted-ca-bundle\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333967 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.333986 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-client-ca\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.334004 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlgst\" (UniqueName: \"kubernetes.io/projected/84d79f7d-71fe-4982-8995-729a288d93fa-kube-api-access-nlgst\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.334026 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34cb42eb-c801-4afa-9b85-64ea3d8c3ab2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bwhh6\" (UID: \"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.334062 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-oauth-config\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.334094 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39055adf-7be5-43c0-851f-e0c2c0e631a7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbr44\" (UID: \"39055adf-7be5-43c0-851f-e0c2c0e631a7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.334116 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.334137 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.334159 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.334183 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae341042-4a8d-4a41-bb0e-931abecc819a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.334271 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.335017 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16ddfc0b-99f7-4c57-a804-665d86d0411b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.335353 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-client-ca\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.335392 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.335728 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.335809 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae341042-4a8d-4a41-bb0e-931abecc819a-config\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.335992 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ae341042-4a8d-4a41-bb0e-931abecc819a-images\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.336351 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-audit-dir\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.336662 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-audit\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.337492 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-image-import-ca\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.337663 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-node-pullsecrets\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.337743 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-node-pullsecrets\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.337882 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4812fe61-7540-41eb-8daa-26541710d1fe-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ggw9l\" (UID: \"4812fe61-7540-41eb-8daa-26541710d1fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.337948 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qw6r\" (UniqueName: \"kubernetes.io/projected/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-kube-api-access-5qw6r\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.337972 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-machine-approver-tls\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338006 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d89392c8-76ea-4723-8fc3-04fcd6727a23-etcd-client\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338009 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338045 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-config\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338075 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16ddfc0b-99f7-4c57-a804-665d86d0411b-audit-dir\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338100 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-serving-cert\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338125 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854e4003-37ab-473a-a282-6e9c453dfd52-serving-cert\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338150 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16ddfc0b-99f7-4c57-a804-665d86d0411b-audit-policies\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338179 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d89392c8-76ea-4723-8fc3-04fcd6727a23-etcd-service-ca\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338204 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-serving-cert\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338272 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63800e6f-d2ec-48e9-9739-60ed474ed51b-metrics-tls\") pod \"dns-operator-744455d44c-qx7dp\" (UID: \"63800e6f-d2ec-48e9-9739-60ed474ed51b\") " pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338300 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338341 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-config\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc 
kubenswrapper[4998]: I0227 10:21:30.338363 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338390 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-policies\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338412 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-dir\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338434 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e2df1f-102d-4440-bc2b-76d89a47be31-serving-cert\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338463 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/854e4003-37ab-473a-a282-6e9c453dfd52-kube-api-access-fpb9b\") pod 
\"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338486 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338513 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-etcd-serving-ca\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338534 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mbjh\" (UniqueName: \"kubernetes.io/projected/d34656b6-50d4-4173-a40b-5a9eddb99397-kube-api-access-6mbjh\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338556 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2k6\" (UniqueName: \"kubernetes.io/projected/63800e6f-d2ec-48e9-9739-60ed474ed51b-kube-api-access-mp2k6\") pod \"dns-operator-744455d44c-qx7dp\" (UID: \"63800e6f-d2ec-48e9-9739-60ed474ed51b\") " pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338579 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e2df1f-102d-4440-bc2b-76d89a47be31-config\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338601 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43e2df1f-102d-4440-bc2b-76d89a47be31-trusted-ca\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338621 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-oauth-serving-cert\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338650 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rpl6\" (UniqueName: \"kubernetes.io/projected/ae341042-4a8d-4a41-bb0e-931abecc819a-kube-api-access-7rpl6\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338675 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16ddfc0b-99f7-4c57-a804-665d86d0411b-etcd-client\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338697 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-config\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338693 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16ddfc0b-99f7-4c57-a804-665d86d0411b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338720 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-service-ca\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338753 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-config\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338777 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39055adf-7be5-43c0-851f-e0c2c0e631a7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbr44\" (UID: 
\"39055adf-7be5-43c0-851f-e0c2c0e631a7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338805 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.339200 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.339775 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-auth-proxy-config\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.341458 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d79f7d-71fe-4982-8995-729a288d93fa-config\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.342236 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-etcd-client\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc 
kubenswrapper[4998]: I0227 10:21:30.342833 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ddfc0b-99f7-4c57-a804-665d86d0411b-serving-cert\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.344102 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84d79f7d-71fe-4982-8995-729a288d93fa-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.338813 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.344646 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4812fe61-7540-41eb-8daa-26541710d1fe-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ggw9l\" (UID: \"4812fe61-7540-41eb-8daa-26541710d1fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.344770 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16ddfc0b-99f7-4c57-a804-665d86d0411b-audit-dir\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: 
\"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.345148 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d79f7d-71fe-4982-8995-729a288d93fa-serving-cert\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.345155 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-client-ca\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.345361 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16ddfc0b-99f7-4c57-a804-665d86d0411b-audit-policies\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.345369 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae341042-4a8d-4a41-bb0e-931abecc819a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.345752 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-config\") pod 
\"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.346992 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-config\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.347563 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.348185 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.348244 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-etcd-serving-ca\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.348656 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-config\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 
10:21:30.348725 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39055adf-7be5-43c0-851f-e0c2c0e631a7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbr44\" (UID: \"39055adf-7be5-43c0-851f-e0c2c0e631a7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.351452 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84d79f7d-71fe-4982-8995-729a288d93fa-service-ca-bundle\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.352047 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39055adf-7be5-43c0-851f-e0c2c0e631a7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbr44\" (UID: \"39055adf-7be5-43c0-851f-e0c2c0e631a7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.352922 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-machine-approver-tls\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.366606 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854e4003-37ab-473a-a282-6e9c453dfd52-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.370616 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.372129 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.373679 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.375287 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16ddfc0b-99f7-4c57-a804-665d86d0411b-encryption-config\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.375931 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-config\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.376377 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16ddfc0b-99f7-4c57-a804-665d86d0411b-etcd-client\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:30 crc 
kubenswrapper[4998]: I0227 10:21:30.377338 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-encryption-config\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.377787 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-serving-cert\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.377851 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.377945 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.378253 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.378886 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.379139 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.379915 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hxpst"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.380309 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.380475 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.380871 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4n9qx"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.381530 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.382839 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-serving-cert\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.382922 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.383133 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2vxxl"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.383798 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.391505 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.392283 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hpdf9"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.392391 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.393522 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.395215 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536460-jgglv"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.395305 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.396065 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536460-jgglv"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.402191 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.403024 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2m9rf"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.403556 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.404112 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.404941 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.405155 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.406682 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vn72h"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.408009 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.409371 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.409856 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.413187 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f8f5r"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.414912 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f8f5r"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.417323 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2ksp8"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.417899 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n6r8g"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.420329 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-phwvx"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.421360 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.421495 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-phwvx"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.421884 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7g9hp"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.423786 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.427701 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.428733 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.430043 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.431461 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h2zrl"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.434688 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.436122 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.437246 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.438478 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.440684 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f8f5r"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.441654 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.442782 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tsvns"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.443786 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ffk9k"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.445409 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536460-jgglv"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.447478 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.449826 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.453712 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-6bhg7"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.456574 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qx7dp"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.456611 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hxw4p"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.456735 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6bhg7"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.458432 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.458487 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.458505 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.458679 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hxw4p"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.461773 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.462283 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.465845 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mbjh\" (UniqueName: \"kubernetes.io/projected/d34656b6-50d4-4173-a40b-5a9eddb99397-kube-api-access-6mbjh\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.465904 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43e2df1f-102d-4440-bc2b-76d89a47be31-trusted-ca\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.465957 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-service-ca\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466005 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqjb\" (UniqueName: \"kubernetes.io/projected/4e349c3e-b968-4c13-968e-124554aca7d2-kube-api-access-2rqjb\") pod \"multus-admission-controller-857f4d67dd-tsvns\" (UID: \"4e349c3e-b968-4c13-968e-124554aca7d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466044 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tm6\" (UniqueName: \"kubernetes.io/projected/223282ee-d242-4896-a1b9-9f63a9bb0915-kube-api-access-74tm6\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466072 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466109 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b820abcf-cb2e-4fef-be37-060602dac285-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p9c6z\" (UID: \"b820abcf-cb2e-4fef-be37-060602dac285\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466186 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466254 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466344 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89392c8-76ea-4723-8fc3-04fcd6727a23-config\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466373 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d89392c8-76ea-4723-8fc3-04fcd6727a23-etcd-ca\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466416 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcc07b80-55eb-465c-9528-6bad6e2bcbc1-serving-cert\") pod \"openshift-config-operator-7777fb866f-vlnjr\" (UID: \"bcc07b80-55eb-465c-9528-6bad6e2bcbc1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466451 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466492 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466529 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5bh\" (UniqueName: \"kubernetes.io/projected/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-kube-api-access-nc5bh\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466584 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c085501f-cb61-43a3-816b-dc1e744642aa-webhook-cert\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466616 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d89392c8-76ea-4723-8fc3-04fcd6727a23-serving-cert\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466646 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9e380580-8ee5-4746-8abd-1e89104afa78-metrics-tls\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466678 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cad1f061-186f-4774-ae82-648462d0912a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466722 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d518bc-a601-48ce-9d37-67eb5657eea1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lg8l6\" (UID: \"21d518bc-a601-48ce-9d37-67eb5657eea1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466779 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34cb42eb-c801-4afa-9b85-64ea3d8c3ab2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bwhh6\" (UID: \"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466812 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdjlc\" (UniqueName: \"kubernetes.io/projected/9e380580-8ee5-4746-8abd-1e89104afa78-kube-api-access-qdjlc\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466850 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b25mg\" (UID: \"827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466904 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a02b4b9-556f-4dc2-9647-68951959ab71-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466940 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.466992 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467030 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4e349c3e-b968-4c13-968e-124554aca7d2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tsvns\" (UID: \"4e349c3e-b968-4c13-968e-124554aca7d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467063 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e380580-8ee5-4746-8abd-1e89104afa78-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467114 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34cb42eb-c801-4afa-9b85-64ea3d8c3ab2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bwhh6\" (UID: \"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467148 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-oauth-config\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467176 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467206 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467259 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d03c805-40fa-4fd0-a049-db518941b121-profile-collector-cert\") pod \"catalog-operator-68c6474976-pxm5q\" (UID: \"3d03c805-40fa-4fd0-a049-db518941b121\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467299 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d89392c8-76ea-4723-8fc3-04fcd6727a23-etcd-client\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467330 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-config\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467367 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbzhk\" (UniqueName: \"kubernetes.io/projected/c085501f-cb61-43a3-816b-dc1e744642aa-kube-api-access-rbzhk\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.467405 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vb5r\" (UniqueName: \"kubernetes.io/projected/b820abcf-cb2e-4fef-be37-060602dac285-kube-api-access-6vb5r\") pod \"olm-operator-6b444d44fb-p9c6z\" (UID: \"b820abcf-cb2e-4fef-be37-060602dac285\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.468567 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d89392c8-76ea-4723-8fc3-04fcd6727a23-etcd-ca\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.468822 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-service-ca\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.469734 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a02b4b9-556f-4dc2-9647-68951959ab71-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.469812 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.469886 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a02b4b9-556f-4dc2-9647-68951959ab71-config\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.471391 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34cb42eb-c801-4afa-9b85-64ea3d8c3ab2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bwhh6\" (UID: \"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.471537 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89392c8-76ea-4723-8fc3-04fcd6727a23-config\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.471715 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34cb42eb-c801-4afa-9b85-64ea3d8c3ab2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bwhh6\" (UID: \"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.472961 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.473015 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.473587 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2vxxl"]
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.474289 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63800e6f-d2ec-48e9-9739-60ed474ed51b-metrics-tls\") pod \"dns-operator-744455d44c-qx7dp\" (UID: \"63800e6f-d2ec-48e9-9739-60ed474ed51b\") " pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.474351 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.474380 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/223282ee-d242-4896-a1b9-9f63a9bb0915-secret-volume\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.474410 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46053881-83ad-4dad-ae13-950fc812a5ed-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rh88s\" (UID: \"46053881-83ad-4dad-ae13-950fc812a5ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.474436 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.474489 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.475579 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-config\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.476137 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d89392c8-76ea-4723-8fc3-04fcd6727a23-serving-cert\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.476334 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.475588 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-oauth-config\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.476713 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.476735 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp2k6\" (UniqueName: \"kubernetes.io/projected/63800e6f-d2ec-48e9-9739-60ed474ed51b-kube-api-access-mp2k6\") pod \"dns-operator-744455d44c-qx7dp\" (UID: \"63800e6f-d2ec-48e9-9739-60ed474ed51b\") " pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.476885 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e2df1f-102d-4440-bc2b-76d89a47be31-config\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477244 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-oauth-serving-cert\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477307 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d518bc-a601-48ce-9d37-67eb5657eea1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lg8l6\" (UID: \"21d518bc-a601-48ce-9d37-67eb5657eea1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477337 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c085501f-cb61-43a3-816b-dc1e744642aa-tmpfs\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477377 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477407 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcc07b80-55eb-465c-9528-6bad6e2bcbc1-serving-cert\") pod \"openshift-config-operator-7777fb866f-vlnjr\" (UID: \"bcc07b80-55eb-465c-9528-6bad6e2bcbc1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477552 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8fkr\" (UniqueName: \"kubernetes.io/projected/3d03c805-40fa-4fd0-a049-db518941b121-kube-api-access-m8fkr\") pod \"catalog-operator-68c6474976-pxm5q\" (UID: \"3d03c805-40fa-4fd0-a049-db518941b121\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477592 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e380580-8ee5-4746-8abd-1e89104afa78-trusted-ca\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477652 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477735 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6j79\" (UniqueName: \"kubernetes.io/projected/827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d-kube-api-access-h6j79\") pod \"control-plane-machine-set-operator-78cbb6b69f-b25mg\" (UID: \"827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477792 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc8n8\" (UniqueName: \"kubernetes.io/projected/34cb42eb-c801-4afa-9b85-64ea3d8c3ab2-kube-api-access-sc8n8\") pod \"openshift-apiserver-operator-796bbdcf4f-bwhh6\" (UID: \"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477864 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d518bc-a601-48ce-9d37-67eb5657eea1-kube-api-access\")
pod \"openshift-kube-scheduler-operator-5fdd9b5758-lg8l6\" (UID: \"21d518bc-a601-48ce-9d37-67eb5657eea1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477936 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477979 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad1f061-186f-4774-ae82-648462d0912a-config\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.477984 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-oauth-serving-cert\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.478087 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjgh\" (UniqueName: \"kubernetes.io/projected/bcc07b80-55eb-465c-9528-6bad6e2bcbc1-kube-api-access-xxjgh\") pod \"openshift-config-operator-7777fb866f-vlnjr\" (UID: \"bcc07b80-55eb-465c-9528-6bad6e2bcbc1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.478190 
4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46053881-83ad-4dad-ae13-950fc812a5ed-proxy-tls\") pod \"machine-config-controller-84d6567774-rh88s\" (UID: \"46053881-83ad-4dad-ae13-950fc812a5ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.478260 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bcc07b80-55eb-465c-9528-6bad6e2bcbc1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vlnjr\" (UID: \"bcc07b80-55eb-465c-9528-6bad6e2bcbc1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.478296 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481615 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klsqf\" (UniqueName: \"kubernetes.io/projected/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-kube-api-access-klsqf\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481760 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92wrt\" (UniqueName: \"kubernetes.io/projected/43e2df1f-102d-4440-bc2b-76d89a47be31-kube-api-access-92wrt\") pod 
\"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481798 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c085501f-cb61-43a3-816b-dc1e744642aa-apiservice-cert\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481836 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5l6m\" (UniqueName: \"kubernetes.io/projected/d89392c8-76ea-4723-8fc3-04fcd6727a23-kube-api-access-c5l6m\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481869 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvx6x\" (UniqueName: \"kubernetes.io/projected/46053881-83ad-4dad-ae13-950fc812a5ed-kube-api-access-pvx6x\") pod \"machine-config-controller-84d6567774-rh88s\" (UID: \"46053881-83ad-4dad-ae13-950fc812a5ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481914 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvdnk\" (UniqueName: \"kubernetes.io/projected/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-kube-api-access-lvdnk\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481939 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d03c805-40fa-4fd0-a049-db518941b121-srv-cert\") pod \"catalog-operator-68c6474976-pxm5q\" (UID: \"3d03c805-40fa-4fd0-a049-db518941b121\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481966 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481996 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-trusted-ca-bundle\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482001 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bcc07b80-55eb-465c-9528-6bad6e2bcbc1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vlnjr\" (UID: \"bcc07b80-55eb-465c-9528-6bad6e2bcbc1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482055 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/223282ee-d242-4896-a1b9-9f63a9bb0915-config-volume\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482093 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b820abcf-cb2e-4fef-be37-060602dac285-srv-cert\") pod \"olm-operator-6b444d44fb-p9c6z\" (UID: \"b820abcf-cb2e-4fef-be37-060602dac285\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482113 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad1f061-186f-4774-ae82-648462d0912a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482139 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d89392c8-76ea-4723-8fc3-04fcd6727a23-etcd-service-ca\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482159 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-serving-cert\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482199 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-policies\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482216 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-dir\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482248 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e2df1f-102d-4440-bc2b-76d89a47be31-serving-cert\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482286 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482450 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.482642 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.481977 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.483490 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d89392c8-76ea-4723-8fc3-04fcd6727a23-etcd-service-ca\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.483537 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.483673 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.483727 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.483763 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.483932 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.483995 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-dir\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.484473 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63800e6f-d2ec-48e9-9739-60ed474ed51b-metrics-tls\") pod \"dns-operator-744455d44c-qx7dp\" (UID: \"63800e6f-d2ec-48e9-9739-60ed474ed51b\") " pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.484653 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv" 
Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.484703 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-policies\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.485105 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d89392c8-76ea-4723-8fc3-04fcd6727a23-etcd-client\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.485438 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.485487 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.486520 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-serving-cert\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.486574 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-trusted-ca-bundle\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.486736 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6bhg7"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.487804 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e2df1f-102d-4440-bc2b-76d89a47be31-serving-cert\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.488006 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e2df1f-102d-4440-bc2b-76d89a47be31-config\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.488073 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hxpst"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.489478 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.491339 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.492996 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2m9rf"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.495499 4998 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hxw4p"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.497242 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.498406 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg"] Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.501169 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.528957 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.539878 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43e2df1f-102d-4440-bc2b-76d89a47be31-trusted-ca\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.561669 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.581437 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584129 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rqjb\" (UniqueName: \"kubernetes.io/projected/4e349c3e-b968-4c13-968e-124554aca7d2-kube-api-access-2rqjb\") pod \"multus-admission-controller-857f4d67dd-tsvns\" (UID: 
\"4e349c3e-b968-4c13-968e-124554aca7d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584180 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74tm6\" (UniqueName: \"kubernetes.io/projected/223282ee-d242-4896-a1b9-9f63a9bb0915-kube-api-access-74tm6\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584208 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b820abcf-cb2e-4fef-be37-060602dac285-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p9c6z\" (UID: \"b820abcf-cb2e-4fef-be37-060602dac285\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584297 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c085501f-cb61-43a3-816b-dc1e744642aa-webhook-cert\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584329 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cad1f061-186f-4774-ae82-648462d0912a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584351 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/9e380580-8ee5-4746-8abd-1e89104afa78-metrics-tls\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584397 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d518bc-a601-48ce-9d37-67eb5657eea1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lg8l6\" (UID: \"21d518bc-a601-48ce-9d37-67eb5657eea1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584427 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdjlc\" (UniqueName: \"kubernetes.io/projected/9e380580-8ee5-4746-8abd-1e89104afa78-kube-api-access-qdjlc\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584453 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b25mg\" (UID: \"827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584482 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a02b4b9-556f-4dc2-9647-68951959ab71-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584517 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4e349c3e-b968-4c13-968e-124554aca7d2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tsvns\" (UID: \"4e349c3e-b968-4c13-968e-124554aca7d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584550 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e380580-8ee5-4746-8abd-1e89104afa78-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584576 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d03c805-40fa-4fd0-a049-db518941b121-profile-collector-cert\") pod \"catalog-operator-68c6474976-pxm5q\" (UID: \"3d03c805-40fa-4fd0-a049-db518941b121\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584604 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbzhk\" (UniqueName: \"kubernetes.io/projected/c085501f-cb61-43a3-816b-dc1e744642aa-kube-api-access-rbzhk\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584630 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vb5r\" (UniqueName: 
\"kubernetes.io/projected/b820abcf-cb2e-4fef-be37-060602dac285-kube-api-access-6vb5r\") pod \"olm-operator-6b444d44fb-p9c6z\" (UID: \"b820abcf-cb2e-4fef-be37-060602dac285\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584660 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a02b4b9-556f-4dc2-9647-68951959ab71-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584686 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a02b4b9-556f-4dc2-9647-68951959ab71-config\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584713 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/223282ee-d242-4896-a1b9-9f63a9bb0915-secret-volume\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584751 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46053881-83ad-4dad-ae13-950fc812a5ed-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rh88s\" (UID: \"46053881-83ad-4dad-ae13-950fc812a5ed\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584796 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c085501f-cb61-43a3-816b-dc1e744642aa-tmpfs\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584824 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d518bc-a601-48ce-9d37-67eb5657eea1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lg8l6\" (UID: \"21d518bc-a601-48ce-9d37-67eb5657eea1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584858 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8fkr\" (UniqueName: \"kubernetes.io/projected/3d03c805-40fa-4fd0-a049-db518941b121-kube-api-access-m8fkr\") pod \"catalog-operator-68c6474976-pxm5q\" (UID: \"3d03c805-40fa-4fd0-a049-db518941b121\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584884 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e380580-8ee5-4746-8abd-1e89104afa78-trusted-ca\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584926 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584953 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6j79\" (UniqueName: \"kubernetes.io/projected/827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d-kube-api-access-h6j79\") pod \"control-plane-machine-set-operator-78cbb6b69f-b25mg\" (UID: \"827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.584977 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d518bc-a601-48ce-9d37-67eb5657eea1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lg8l6\" (UID: \"21d518bc-a601-48ce-9d37-67eb5657eea1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585002 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585024 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad1f061-186f-4774-ae82-648462d0912a-config\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585065 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46053881-83ad-4dad-ae13-950fc812a5ed-proxy-tls\") pod \"machine-config-controller-84d6567774-rh88s\" (UID: \"46053881-83ad-4dad-ae13-950fc812a5ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585090 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klsqf\" (UniqueName: \"kubernetes.io/projected/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-kube-api-access-klsqf\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585122 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c085501f-cb61-43a3-816b-dc1e744642aa-apiservice-cert\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585146 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvx6x\" (UniqueName: \"kubernetes.io/projected/46053881-83ad-4dad-ae13-950fc812a5ed-kube-api-access-pvx6x\") pod \"machine-config-controller-84d6567774-rh88s\" (UID: \"46053881-83ad-4dad-ae13-950fc812a5ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585181 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/3d03c805-40fa-4fd0-a049-db518941b121-srv-cert\") pod \"catalog-operator-68c6474976-pxm5q\" (UID: \"3d03c805-40fa-4fd0-a049-db518941b121\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585270 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/223282ee-d242-4896-a1b9-9f63a9bb0915-config-volume\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585301 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b820abcf-cb2e-4fef-be37-060602dac285-srv-cert\") pod \"olm-operator-6b444d44fb-p9c6z\" (UID: \"b820abcf-cb2e-4fef-be37-060602dac285\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585326 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad1f061-186f-4774-ae82-648462d0912a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585501 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c085501f-cb61-43a3-816b-dc1e744642aa-tmpfs\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.585555 4998 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46053881-83ad-4dad-ae13-950fc812a5ed-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rh88s\" (UID: \"46053881-83ad-4dad-ae13-950fc812a5ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.602100 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.621327 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.641147 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.661398 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.681563 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.689170 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4e349c3e-b968-4c13-968e-124554aca7d2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tsvns\" (UID: \"4e349c3e-b968-4c13-968e-124554aca7d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.701249 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.722001 4998 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.742671 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.762095 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.764814 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.764894 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.765190 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.765207 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.781895 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.788762 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.801424 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.821493 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.853863 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p75f8\" (UniqueName: \"kubernetes.io/projected/68f33d7b-1e6e-45c8-b37e-2eff317c25d2-kube-api-access-p75f8\") pod \"apiserver-76f77b778f-rqwm5\" (UID: \"68f33d7b-1e6e-45c8-b37e-2eff317c25d2\") " pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.877126 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlgst\" (UniqueName: \"kubernetes.io/projected/84d79f7d-71fe-4982-8995-729a288d93fa-kube-api-access-nlgst\") pod \"authentication-operator-69f744f599-hpdf9\" (UID: \"84d79f7d-71fe-4982-8995-729a288d93fa\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:30 crc 
kubenswrapper[4998]: I0227 10:21:30.882144 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.890516 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46053881-83ad-4dad-ae13-950fc812a5ed-proxy-tls\") pod \"machine-config-controller-84d6567774-rh88s\" (UID: \"46053881-83ad-4dad-ae13-950fc812a5ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.902639 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.922495 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.962001 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.963530 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b82z\" (UniqueName: \"kubernetes.io/projected/39055adf-7be5-43c0-851f-e0c2c0e631a7-kube-api-access-2b82z\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbr44\" (UID: \"39055adf-7be5-43c0-851f-e0c2c0e631a7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.981783 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 10:21:30 crc kubenswrapper[4998]: I0227 10:21:30.988847 4998 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d518bc-a601-48ce-9d37-67eb5657eea1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lg8l6\" (UID: \"21d518bc-a601-48ce-9d37-67eb5657eea1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.001759 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.005415 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d518bc-a601-48ce-9d37-67eb5657eea1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lg8l6\" (UID: \"21d518bc-a601-48ce-9d37-67eb5657eea1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.037925 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5hv5\" (UniqueName: \"kubernetes.io/projected/16ddfc0b-99f7-4c57-a804-665d86d0411b-kube-api-access-d5hv5\") pod \"apiserver-7bbb656c7d-68mc7\" (UID: \"16ddfc0b-99f7-4c57-a804-665d86d0411b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.057748 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnrkk\" (UniqueName: \"kubernetes.io/projected/daeaab34-be3d-4a1e-964f-17e3661682bc-kube-api-access-dnrkk\") pod \"downloads-7954f5f757-vn72h\" (UID: \"daeaab34-be3d-4a1e-964f-17e3661682bc\") " pod="openshift-console/downloads-7954f5f757-vn72h" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.077041 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m65zs\" (UniqueName: 
\"kubernetes.io/projected/2c5ba57a-1b2e-4b26-a17e-2d61d61b9645-kube-api-access-m65zs\") pod \"machine-approver-56656f9798-mrrql\" (UID: \"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.080093 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.093782 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.102296 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.103927 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlpzn\" (UniqueName: \"kubernetes.io/projected/4812fe61-7540-41eb-8daa-26541710d1fe-kube-api-access-jlpzn\") pod \"cluster-samples-operator-665b6dd947-ggw9l\" (UID: \"4812fe61-7540-41eb-8daa-26541710d1fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.122641 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.144744 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.160213 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.166651 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.174571 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c085501f-cb61-43a3-816b-dc1e744642aa-apiservice-cert\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.175931 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c085501f-cb61-43a3-816b-dc1e744642aa-webhook-cert\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.176213 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.183120 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.197929 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.222003 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.231970 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qw6r\" (UniqueName: \"kubernetes.io/projected/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-kube-api-access-5qw6r\") pod \"controller-manager-879f6c89f-2ksp8\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.245292 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rpl6\" (UniqueName: \"kubernetes.io/projected/ae341042-4a8d-4a41-bb0e-931abecc819a-kube-api-access-7rpl6\") pod \"machine-api-operator-5694c8668f-dvl4h\" (UID: \"ae341042-4a8d-4a41-bb0e-931abecc819a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.261414 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-vn72h" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.261949 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.264736 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/854e4003-37ab-473a-a282-6e9c453dfd52-kube-api-access-fpb9b\") pod \"route-controller-manager-6576b87f9c-x828r\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.267955 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b820abcf-cb2e-4fef-be37-060602dac285-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p9c6z\" (UID: \"b820abcf-cb2e-4fef-be37-060602dac285\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.271437 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/223282ee-d242-4896-a1b9-9f63a9bb0915-secret-volume\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.272687 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d03c805-40fa-4fd0-a049-db518941b121-profile-collector-cert\") pod \"catalog-operator-68c6474976-pxm5q\" (UID: \"3d03c805-40fa-4fd0-a049-db518941b121\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" Feb 27 10:21:31 crc 
kubenswrapper[4998]: I0227 10:21:31.287355 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.288213 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rqwm5"] Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.302041 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.302956 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d03c805-40fa-4fd0-a049-db518941b121-srv-cert\") pod \"catalog-operator-68c6474976-pxm5q\" (UID: \"3d03c805-40fa-4fd0-a049-db518941b121\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.311376 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b820abcf-cb2e-4fef-be37-060602dac285-srv-cert\") pod \"olm-operator-6b444d44fb-p9c6z\" (UID: \"b820abcf-cb2e-4fef-be37-060602dac285\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.312852 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hpdf9"] Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.321149 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.345344 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 
10:21:31.361279 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.381694 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.402322 4998 request.go:700] Waited for 1.021638579s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&limit=500&resourceVersion=0 Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.404955 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.414550 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.424652 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.438893 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9e380580-8ee5-4746-8abd-1e89104afa78-metrics-tls\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.441946 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.443645 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.447625 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.456888 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e380580-8ee5-4746-8abd-1e89104afa78-trusted-ca\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.461131 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.482695 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.501816 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.523037 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.543607 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.565066 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: 
I0227 10:21:31.582033 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.584624 4998 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.584736 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d-control-plane-machine-set-operator-tls podName:827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d nodeName:}" failed. No retries permitted until 2026-02-27 10:21:32.084710922 +0000 UTC m=+244.082981890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-b25mg" (UID: "827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d") : failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.584808 4998 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.584899 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a02b4b9-556f-4dc2-9647-68951959ab71-serving-cert podName:8a02b4b9-556f-4dc2-9647-68951959ab71 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:32.084871197 +0000 UTC m=+244.083142165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8a02b4b9-556f-4dc2-9647-68951959ab71-serving-cert") pod "kube-controller-manager-operator-78b949d7b-7rm6t" (UID: "8a02b4b9-556f-4dc2-9647-68951959ab71") : failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.584953 4998 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.584974 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a02b4b9-556f-4dc2-9647-68951959ab71-config podName:8a02b4b9-556f-4dc2-9647-68951959ab71 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:32.084968209 +0000 UTC m=+244.083239177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8a02b4b9-556f-4dc2-9647-68951959ab71-config") pod "kube-controller-manager-operator-78b949d7b-7rm6t" (UID: "8a02b4b9-556f-4dc2-9647-68951959ab71") : failed to sync configmap cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.586167 4998 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.586197 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cad1f061-186f-4774-ae82-648462d0912a-config podName:cad1f061-186f-4774-ae82-648462d0912a nodeName:}" failed. No retries permitted until 2026-02-27 10:21:32.086188163 +0000 UTC m=+244.084459131 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cad1f061-186f-4774-ae82-648462d0912a-config") pod "kube-apiserver-operator-766d6c64bb-fd4sb" (UID: "cad1f061-186f-4774-ae82-648462d0912a") : failed to sync configmap cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.586246 4998 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.586269 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/223282ee-d242-4896-a1b9-9f63a9bb0915-config-volume podName:223282ee-d242-4896-a1b9-9f63a9bb0915 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:32.086262825 +0000 UTC m=+244.084533793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/223282ee-d242-4896-a1b9-9f63a9bb0915-config-volume") pod "collect-profiles-29536455-gfh89" (UID: "223282ee-d242-4896-a1b9-9f63a9bb0915") : failed to sync configmap cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.586283 4998 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.586304 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-operator-metrics podName:9881d4cb-217e-455b-b8f3-0ad24a1e51d7 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:32.086297416 +0000 UTC m=+244.084568374 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-operator-metrics") pod "marketplace-operator-79b997595-2vxxl" (UID: "9881d4cb-217e-455b-b8f3-0ad24a1e51d7") : failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.586325 4998 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.586341 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-trusted-ca podName:9881d4cb-217e-455b-b8f3-0ad24a1e51d7 nodeName:}" failed. No retries permitted until 2026-02-27 10:21:32.086336407 +0000 UTC m=+244.084607375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-trusted-ca") pod "marketplace-operator-79b997595-2vxxl" (UID: "9881d4cb-217e-455b-b8f3-0ad24a1e51d7") : failed to sync configmap cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.587542 4998 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.587569 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cad1f061-186f-4774-ae82-648462d0912a-serving-cert podName:cad1f061-186f-4774-ae82-648462d0912a nodeName:}" failed. No retries permitted until 2026-02-27 10:21:32.08756118 +0000 UTC m=+244.085832148 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cad1f061-186f-4774-ae82-648462d0912a-serving-cert") pod "kube-apiserver-operator-766d6c64bb-fd4sb" (UID: "cad1f061-186f-4774-ae82-648462d0912a") : failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.588862 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" event={"ID":"84d79f7d-71fe-4982-8995-729a288d93fa","Type":"ContainerStarted","Data":"425d4498bfb8f36900bc5dbf1a7878588d2c77353dfbb01864397d772776f878"} Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.588896 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" event={"ID":"84d79f7d-71fe-4982-8995-729a288d93fa","Type":"ContainerStarted","Data":"9ed50246a3fd0e1766a1d4ba473fa59aef1d9ac09d6b705e373abe3ee7b97ddc"} Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.597412 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" event={"ID":"68f33d7b-1e6e-45c8-b37e-2eff317c25d2","Type":"ContainerStarted","Data":"b89c8073bf221130247bb53607808d8c248f258d0c8f0381abb7350341f5a171"} Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.601149 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.602280 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" event={"ID":"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645","Type":"ContainerStarted","Data":"0dc9d5a3445ba92aa5dc5cfbecb732fc30e5305a66dd62953befcb4320744595"} Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.602307 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" event={"ID":"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645","Type":"ContainerStarted","Data":"91a92895c9427154e94951ef6596b93e0a88d6676d332a1a8c9635a265b1eb5d"} Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.625357 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.641883 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.661461 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.676969 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dvl4h"] Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.682193 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l"] Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.684798 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.685921 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44"] Feb 27 10:21:31 crc kubenswrapper[4998]: W0227 10:21:31.695899 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39055adf_7be5_43c0_851f_e0c2c0e631a7.slice/crio-043ff8b8cabcc4ed9936d55d078b71a89c52e5898da6234be5223743dbfbdb4e WatchSource:0}: Error finding container 043ff8b8cabcc4ed9936d55d078b71a89c52e5898da6234be5223743dbfbdb4e: Status 404 returned error can't find the container 
with id 043ff8b8cabcc4ed9936d55d078b71a89c52e5898da6234be5223743dbfbdb4e Feb 27 10:21:31 crc kubenswrapper[4998]: W0227 10:21:31.696984 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae341042_4a8d_4a41_bb0e_931abecc819a.slice/crio-55230a858fe3a44a73ebca88b259660e839f8205c2cd9a53d0b14b7bd55e3640 WatchSource:0}: Error finding container 55230a858fe3a44a73ebca88b259660e839f8205c2cd9a53d0b14b7bd55e3640: Status 404 returned error can't find the container with id 55230a858fe3a44a73ebca88b259660e839f8205c2cd9a53d0b14b7bd55e3640 Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.702069 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.714341 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2ksp8"] Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.721516 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.740244 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"] Feb 27 10:21:31 crc kubenswrapper[4998]: W0227 10:21:31.741124 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65b6d855_bcdd_41e5_b3ea_e269a1b6b689.slice/crio-145778089d579e512edcc8aa52ce46633b5623963c41d0f6ae5e23664dc4de6f WatchSource:0}: Error finding container 145778089d579e512edcc8aa52ce46633b5623963c41d0f6ae5e23664dc4de6f: Status 404 returned error can't find the container with id 145778089d579e512edcc8aa52ce46633b5623963c41d0f6ae5e23664dc4de6f Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.741965 4998 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.749850 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7"] Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.752404 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vn72h"] Feb 27 10:21:31 crc kubenswrapper[4998]: W0227 10:21:31.758182 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod854e4003_37ab_473a_a282_6e9c453dfd52.slice/crio-c53fcc1b5954074f7456fd74d5820699fc456c603c0081670abb490af0bb9bac WatchSource:0}: Error finding container c53fcc1b5954074f7456fd74d5820699fc456c603c0081670abb490af0bb9bac: Status 404 returned error can't find the container with id c53fcc1b5954074f7456fd74d5820699fc456c603c0081670abb490af0bb9bac Feb 27 10:21:31 crc kubenswrapper[4998]: W0227 10:21:31.759582 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16ddfc0b_99f7_4c57_a804_665d86d0411b.slice/crio-cc0dbc889e60e7abfad25dd161c704957591d1d27842a17c55043d10cf7ea224 WatchSource:0}: Error finding container cc0dbc889e60e7abfad25dd161c704957591d1d27842a17c55043d10cf7ea224: Status 404 returned error can't find the container with id cc0dbc889e60e7abfad25dd161c704957591d1d27842a17c55043d10cf7ea224 Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.768858 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.781374 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.789186 4998 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: E0227 10:21:31.789334 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs podName:40178d6d-6068-4937-b7d5-883538892cc5 nodeName:}" failed. No retries permitted until 2026-02-27 10:23:33.789313582 +0000 UTC m=+365.787584550 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs") pod "network-metrics-daemon-86xkz" (UID: "40178d6d-6068-4937-b7d5-883538892cc5") : failed to sync secret cache: timed out waiting for the condition Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.801185 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.824114 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.848015 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.863491 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.882189 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.901835 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.921517 4998 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.941333 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.962514 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 10:21:31 crc kubenswrapper[4998]: I0227 10:21:31.981502 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.001360 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.021201 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.041861 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.061730 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.082016 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.101343 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.114412 
4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.114480 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.114508 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad1f061-186f-4774-ae82-648462d0912a-config\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.114589 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/223282ee-d242-4896-a1b9-9f63a9bb0915-config-volume\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.114615 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad1f061-186f-4774-ae82-648462d0912a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") 
" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.114735 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b25mg\" (UID: \"827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.114969 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a02b4b9-556f-4dc2-9647-68951959ab71-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.114991 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a02b4b9-556f-4dc2-9647-68951959ab71-config\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.115760 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a02b4b9-556f-4dc2-9647-68951959ab71-config\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.115830 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/223282ee-d242-4896-a1b9-9f63a9bb0915-config-volume\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.116320 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.120429 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a02b4b9-556f-4dc2-9647-68951959ab71-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.121195 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b25mg\" (UID: \"827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.121919 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.122523 4998 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.127393 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad1f061-186f-4774-ae82-648462d0912a-config\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.142182 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.162463 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.168027 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad1f061-186f-4774-ae82-648462d0912a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.181806 4998 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.201759 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 
10:21:32.221376 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.241605 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.262873 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.282693 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.322725 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.342630 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.362808 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.381440 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.402562 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.419853 4998 request.go:700] Waited for 1.960471632s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 
10:21:32.425949 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.441996 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.476039 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mbjh\" (UniqueName: \"kubernetes.io/projected/d34656b6-50d4-4173-a40b-5a9eddb99397-kube-api-access-6mbjh\") pod \"oauth-openshift-558db77b4-h2zrl\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.499272 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5bh\" (UniqueName: \"kubernetes.io/projected/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-kube-api-access-nc5bh\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.511236 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.515908 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp2k6\" (UniqueName: \"kubernetes.io/projected/63800e6f-d2ec-48e9-9739-60ed474ed51b-kube-api-access-mp2k6\") pod \"dns-operator-744455d44c-qx7dp\" (UID: \"63800e6f-d2ec-48e9-9739-60ed474ed51b\") " pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.538207 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4de17c0f-b467-4b89-8152-ecd5eb9cd5ed-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hnlvv\" (UID: \"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.555703 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.555945 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc8n8\" (UniqueName: \"kubernetes.io/projected/34cb42eb-c801-4afa-9b85-64ea3d8c3ab2-kube-api-access-sc8n8\") pod \"openshift-apiserver-operator-796bbdcf4f-bwhh6\" (UID: \"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.578292 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjgh\" (UniqueName: \"kubernetes.io/projected/bcc07b80-55eb-465c-9528-6bad6e2bcbc1-kube-api-access-xxjgh\") pod \"openshift-config-operator-7777fb866f-vlnjr\" (UID: \"bcc07b80-55eb-465c-9528-6bad6e2bcbc1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.607576 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92wrt\" (UniqueName: \"kubernetes.io/projected/43e2df1f-102d-4440-bc2b-76d89a47be31-kube-api-access-92wrt\") pod \"console-operator-58897d9998-ffk9k\" (UID: \"43e2df1f-102d-4440-bc2b-76d89a47be31\") " pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.620864 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" event={"ID":"ae341042-4a8d-4a41-bb0e-931abecc819a","Type":"ContainerStarted","Data":"982763717bf3be983806bff26d2cc812b51b8b625e3e752204326e432ca51ba2"} Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.620906 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.620919 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" event={"ID":"ae341042-4a8d-4a41-bb0e-931abecc819a","Type":"ContainerStarted","Data":"0bbdbd5a19861b70bcaf006ea9abe5fc2481513e77d0a87c4aa165aaca85f6a1"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.621289 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" event={"ID":"ae341042-4a8d-4a41-bb0e-931abecc819a","Type":"ContainerStarted","Data":"55230a858fe3a44a73ebca88b259660e839f8205c2cd9a53d0b14b7bd55e3640"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.623512 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" event={"ID":"39055adf-7be5-43c0-851f-e0c2c0e631a7","Type":"ContainerStarted","Data":"701d9314b639f20f0e6c22e63b464acd6a2d50d6e03b6a862f46a587af5f1ae3"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.623560 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" event={"ID":"39055adf-7be5-43c0-851f-e0c2c0e631a7","Type":"ContainerStarted","Data":"043ff8b8cabcc4ed9936d55d078b71a89c52e5898da6234be5223743dbfbdb4e"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.627853 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5l6m\" (UniqueName: \"kubernetes.io/projected/d89392c8-76ea-4723-8fc3-04fcd6727a23-kube-api-access-c5l6m\") pod \"etcd-operator-b45778765-7g9hp\" (UID: \"d89392c8-76ea-4723-8fc3-04fcd6727a23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.631825 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" event={"ID":"854e4003-37ab-473a-a282-6e9c453dfd52","Type":"ContainerStarted","Data":"210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.631879 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" event={"ID":"854e4003-37ab-473a-a282-6e9c453dfd52","Type":"ContainerStarted","Data":"c53fcc1b5954074f7456fd74d5820699fc456c603c0081670abb490af0bb9bac"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.632568 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.634551 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" event={"ID":"2c5ba57a-1b2e-4b26-a17e-2d61d61b9645","Type":"ContainerStarted","Data":"fcf147e2bdc7704ae98479fcb659dec65475cf50f84307bf1970b86406573eaf"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.636065 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ffk9k"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.636466 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvdnk\" (UniqueName: \"kubernetes.io/projected/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-kube-api-access-lvdnk\") pod \"console-f9d7485db-n6r8g\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.636539 4998 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-x828r container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.636601 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" podUID="854e4003-37ab-473a-a282-6e9c453dfd52" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.637275 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vn72h" event={"ID":"daeaab34-be3d-4a1e-964f-17e3661682bc","Type":"ContainerStarted","Data":"6a681b2f05a715f75e778b0f54082554792bccc45382fe298609412ce128521f"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.637331 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vn72h" event={"ID":"daeaab34-be3d-4a1e-964f-17e3661682bc","Type":"ContainerStarted","Data":"b288d7e21a48da24f598b23620f4af2726b4eeb86afb855ddf9e606c7b8682d9"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.637712 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-vn72h"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.639415 4998 patch_prober.go:28] interesting pod/downloads-7954f5f757-vn72h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.639468 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vn72h" podUID="daeaab34-be3d-4a1e-964f-17e3661682bc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.648582 4998 generic.go:334] "Generic (PLEG): container finished" podID="68f33d7b-1e6e-45c8-b37e-2eff317c25d2" containerID="b56ffed2d7e7fd4762d214b7280db4b445414ac18a5ac74aef96fdbe52de31fb" exitCode=0
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.648776 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" event={"ID":"68f33d7b-1e6e-45c8-b37e-2eff317c25d2","Type":"ContainerDied","Data":"b56ffed2d7e7fd4762d214b7280db4b445414ac18a5ac74aef96fdbe52de31fb"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.652219 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" event={"ID":"65b6d855-bcdd-41e5-b3ea-e269a1b6b689","Type":"ContainerStarted","Data":"3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.652290 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" event={"ID":"65b6d855-bcdd-41e5-b3ea-e269a1b6b689","Type":"ContainerStarted","Data":"145778089d579e512edcc8aa52ce46633b5623963c41d0f6ae5e23664dc4de6f"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.653374 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.657651 4998 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2ksp8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.657777 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" podUID="65b6d855-bcdd-41e5-b3ea-e269a1b6b689" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.701763 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" event={"ID":"4812fe61-7540-41eb-8daa-26541710d1fe","Type":"ContainerStarted","Data":"e1b3e4b2c38826c7e6aa9a7e91295ca36002280b5a53b187714b8c54f308c0a8"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.701822 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" event={"ID":"4812fe61-7540-41eb-8daa-26541710d1fe","Type":"ContainerStarted","Data":"d8242e8943a2d5bc8008e97c7153229167ca915b98b3d057ad43734a1358d7ab"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.701837 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" event={"ID":"4812fe61-7540-41eb-8daa-26541710d1fe","Type":"ContainerStarted","Data":"38e9676916f3d1912dfd9228065692637865d71b6d673e1d05d49d4727d6f14b"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.702462 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74tm6\" (UniqueName: \"kubernetes.io/projected/223282ee-d242-4896-a1b9-9f63a9bb0915-kube-api-access-74tm6\") pod \"collect-profiles-29536455-gfh89\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.711755 4998 generic.go:334] "Generic (PLEG): container finished" podID="16ddfc0b-99f7-4c57-a804-665d86d0411b" containerID="b3bbe062c286d9baeb71f11ccd671dbb2938c41bcd2137bc5d736313c4539f73" exitCode=0
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.713799 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" event={"ID":"16ddfc0b-99f7-4c57-a804-665d86d0411b","Type":"ContainerDied","Data":"b3bbe062c286d9baeb71f11ccd671dbb2938c41bcd2137bc5d736313c4539f73"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.713834 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" event={"ID":"16ddfc0b-99f7-4c57-a804-665d86d0411b","Type":"ContainerStarted","Data":"cc0dbc889e60e7abfad25dd161c704957591d1d27842a17c55043d10cf7ea224"}
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.733944 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cad1f061-186f-4774-ae82-648462d0912a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fd4sb\" (UID: \"cad1f061-186f-4774-ae82-648462d0912a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.744116 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h2zrl"]
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.772256 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rqjb\" (UniqueName: \"kubernetes.io/projected/4e349c3e-b968-4c13-968e-124554aca7d2-kube-api-access-2rqjb\") pod \"multus-admission-controller-857f4d67dd-tsvns\" (UID: \"4e349c3e-b968-4c13-968e-124554aca7d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.772978 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdjlc\" (UniqueName: \"kubernetes.io/projected/9e380580-8ee5-4746-8abd-1e89104afa78-kube-api-access-qdjlc\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.775986 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a02b4b9-556f-4dc2-9647-68951959ab71-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7rm6t\" (UID: \"8a02b4b9-556f-4dc2-9647-68951959ab71\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.781814 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.796884 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.801580 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.810622 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e380580-8ee5-4746-8abd-1e89104afa78-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7mccc\" (UID: \"9e380580-8ee5-4746-8abd-1e89104afa78\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.827954 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.834213 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vb5r\" (UniqueName: \"kubernetes.io/projected/b820abcf-cb2e-4fef-be37-060602dac285-kube-api-access-6vb5r\") pod \"olm-operator-6b444d44fb-p9c6z\" (UID: \"b820abcf-cb2e-4fef-be37-060602dac285\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.834935 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.835309 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbzhk\" (UniqueName: \"kubernetes.io/projected/c085501f-cb61-43a3-816b-dc1e744642aa-kube-api-access-rbzhk\") pod \"packageserver-d55dfcdfc-6pn2r\" (UID: \"c085501f-cb61-43a3-816b-dc1e744642aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.865745 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.890164 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6j79\" (UniqueName: \"kubernetes.io/projected/827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d-kube-api-access-h6j79\") pod \"control-plane-machine-set-operator-78cbb6b69f-b25mg\" (UID: \"827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.891208 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8fkr\" (UniqueName: \"kubernetes.io/projected/3d03c805-40fa-4fd0-a049-db518941b121-kube-api-access-m8fkr\") pod \"catalog-operator-68c6474976-pxm5q\" (UID: \"3d03c805-40fa-4fd0-a049-db518941b121\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.913152 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d518bc-a601-48ce-9d37-67eb5657eea1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lg8l6\" (UID: \"21d518bc-a601-48ce-9d37-67eb5657eea1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.915052 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qx7dp"]
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.916898 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvx6x\" (UniqueName: \"kubernetes.io/projected/46053881-83ad-4dad-ae13-950fc812a5ed-kube-api-access-pvx6x\") pod \"machine-config-controller-84d6567774-rh88s\" (UID: \"46053881-83ad-4dad-ae13-950fc812a5ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.930057 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.932883 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klsqf\" (UniqueName: \"kubernetes.io/projected/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-kube-api-access-klsqf\") pod \"marketplace-operator-79b997595-2vxxl\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.948738 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.951197 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.983680 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.984279 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.994331 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.994617 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.994626 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.994996 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q"
Feb 27 10:21:32 crc kubenswrapper[4998]: I0227 10:21:32.999070 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.004469 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.010048 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.021877 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.029158 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.036653 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052277 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbgr4\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-kube-api-access-pbgr4\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052354 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xggj7\" (UID: \"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052435 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/83d0a613-ab45-4611-a345-66d75c8f0253-proxy-tls\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052481 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29l9\" (UniqueName: \"kubernetes.io/projected/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-kube-api-access-k29l9\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052540 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-default-certificate\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052577 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmrjr\" (UniqueName: \"kubernetes.io/projected/efcfd94f-2069-47b4-9c8f-a9be5330ff28-kube-api-access-cmrjr\") pod \"package-server-manager-789f6589d5-vvpct\" (UID: \"efcfd94f-2069-47b4-9c8f-a9be5330ff28\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052604 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xggj7\" (UID: \"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052661 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdvb5\" (UniqueName: \"kubernetes.io/projected/daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d-kube-api-access-gdvb5\") pod \"kube-storage-version-migrator-operator-b67b599dd-xggj7\" (UID: \"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052697 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/efcfd94f-2069-47b4-9c8f-a9be5330ff28-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vvpct\" (UID: \"efcfd94f-2069-47b4-9c8f-a9be5330ff28\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052743 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea90f3e-d099-4aa2-840c-ab69127694ee-serving-cert\") pod \"service-ca-operator-777779d784-lwjmt\" (UID: \"6ea90f3e-d099-4aa2-840c-ab69127694ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052773 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-csi-data-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052808 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052892 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8e843cdb-2025-4b35-bcee-562907dbd9d9-node-bootstrap-token\") pod \"machine-config-server-phwvx\" (UID: \"8e843cdb-2025-4b35-bcee-562907dbd9d9\") " pod="openshift-machine-config-operator/machine-config-server-phwvx"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.052961 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/83d0a613-ab45-4611-a345-66d75c8f0253-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053050 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-socket-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053074 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-registration-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053109 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-service-ca-bundle\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053132 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/83d0a613-ab45-4611-a345-66d75c8f0253-images\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053155 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl52g\" (UniqueName: \"kubernetes.io/projected/83d0a613-ab45-4611-a345-66d75c8f0253-kube-api-access-kl52g\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053180 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/855749ad-5c93-4f59-be0a-68a0ea0bee93-signing-cabundle\") pod \"service-ca-9c57cc56f-hxpst\" (UID: \"855749ad-5c93-4f59-be0a-68a0ea0bee93\") " pod="openshift-service-ca/service-ca-9c57cc56f-hxpst"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053237 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053416 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053459 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea90f3e-d099-4aa2-840c-ab69127694ee-config\") pod \"service-ca-operator-777779d784-lwjmt\" (UID: \"6ea90f3e-d099-4aa2-840c-ab69127694ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053559 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pclh\" (UniqueName: \"kubernetes.io/projected/855749ad-5c93-4f59-be0a-68a0ea0bee93-kube-api-access-5pclh\") pod \"service-ca-9c57cc56f-hxpst\" (UID: \"855749ad-5c93-4f59-be0a-68a0ea0bee93\") " pod="openshift-service-ca/service-ca-9c57cc56f-hxpst"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053597 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-bound-sa-token\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053636 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-metrics-certs\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053661 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/855749ad-5c93-4f59-be0a-68a0ea0bee93-signing-key\") pod \"service-ca-9c57cc56f-hxpst\" (UID: \"855749ad-5c93-4f59-be0a-68a0ea0bee93\") " pod="openshift-service-ca/service-ca-9c57cc56f-hxpst"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053720 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6njcz\" (UniqueName: \"kubernetes.io/projected/36a1e36a-d138-4606-a280-ef688b10a438-kube-api-access-6njcz\") pod \"auto-csr-approver-29536460-jgglv\" (UID: \"36a1e36a-d138-4606-a280-ef688b10a438\") " pod="openshift-infra/auto-csr-approver-29536460-jgglv"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053820 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-tls\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053844 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-plugins-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053883 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22x84\" (UniqueName: \"kubernetes.io/projected/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-kube-api-access-22x84\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053904 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8e843cdb-2025-4b35-bcee-562907dbd9d9-certs\") pod \"machine-config-server-phwvx\" (UID: \"8e843cdb-2025-4b35-bcee-562907dbd9d9\") " pod="openshift-machine-config-operator/machine-config-server-phwvx"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.053978 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kts8d\" (UniqueName: \"kubernetes.io/projected/8e843cdb-2025-4b35-bcee-562907dbd9d9-kube-api-access-kts8d\") pod \"machine-config-server-phwvx\" (UID: \"8e843cdb-2025-4b35-bcee-562907dbd9d9\") " pod="openshift-machine-config-operator/machine-config-server-phwvx"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.054125 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-certificates\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.054150 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-stats-auth\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.054173 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x99jh\" (UniqueName: \"kubernetes.io/projected/6ea90f3e-d099-4aa2-840c-ab69127694ee-kube-api-access-x99jh\") pod \"service-ca-operator-777779d784-lwjmt\" (UID: \"6ea90f3e-d099-4aa2-840c-ab69127694ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.054201 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-mountpoint-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.054252 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-trusted-ca\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.054276 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlrx\" (UniqueName: \"kubernetes.io/projected/6d09adc3-707f-4a3e-b268-07e7df9fa3da-kube-api-access-7hlrx\") pod \"migrator-59844c95c7-sw9b4\" (UID: \"6d09adc3-707f-4a3e-b268-07e7df9fa3da\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4"
Feb 27 10:21:33
crc kubenswrapper[4998]: E0227 10:21:33.059053 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:33.559033109 +0000 UTC m=+245.557304267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.072780 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.101820 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ffk9k"] Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.154920 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155067 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmrjr\" (UniqueName: \"kubernetes.io/projected/efcfd94f-2069-47b4-9c8f-a9be5330ff28-kube-api-access-cmrjr\") pod \"package-server-manager-789f6589d5-vvpct\" (UID: \"efcfd94f-2069-47b4-9c8f-a9be5330ff28\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155095 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xggj7\" (UID: \"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155130 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdvb5\" (UniqueName: \"kubernetes.io/projected/daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d-kube-api-access-gdvb5\") pod \"kube-storage-version-migrator-operator-b67b599dd-xggj7\" (UID: \"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155146 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/efcfd94f-2069-47b4-9c8f-a9be5330ff28-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vvpct\" (UID: \"efcfd94f-2069-47b4-9c8f-a9be5330ff28\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155173 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea90f3e-d099-4aa2-840c-ab69127694ee-serving-cert\") pod \"service-ca-operator-777779d784-lwjmt\" (UID: \"6ea90f3e-d099-4aa2-840c-ab69127694ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 
10:21:33.155189 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-csi-data-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155278 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8e843cdb-2025-4b35-bcee-562907dbd9d9-node-bootstrap-token\") pod \"machine-config-server-phwvx\" (UID: \"8e843cdb-2025-4b35-bcee-562907dbd9d9\") " pod="openshift-machine-config-operator/machine-config-server-phwvx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155300 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/83d0a613-ab45-4611-a345-66d75c8f0253-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155320 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84d9w\" (UniqueName: \"kubernetes.io/projected/5de52eac-6daa-4c2b-a61d-cb5a9668c0ea-kube-api-access-84d9w\") pod \"dns-default-6bhg7\" (UID: \"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea\") " pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155383 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-socket-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 
10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155400 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-registration-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155414 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5de52eac-6daa-4c2b-a61d-cb5a9668c0ea-metrics-tls\") pod \"dns-default-6bhg7\" (UID: \"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea\") " pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155429 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl52g\" (UniqueName: \"kubernetes.io/projected/83d0a613-ab45-4611-a345-66d75c8f0253-kube-api-access-kl52g\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155445 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/855749ad-5c93-4f59-be0a-68a0ea0bee93-signing-cabundle\") pod \"service-ca-9c57cc56f-hxpst\" (UID: \"855749ad-5c93-4f59-be0a-68a0ea0bee93\") " pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155470 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-service-ca-bundle\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " 
pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155484 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/83d0a613-ab45-4611-a345-66d75c8f0253-images\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155509 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155545 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155560 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea90f3e-d099-4aa2-840c-ab69127694ee-config\") pod \"service-ca-operator-777779d784-lwjmt\" (UID: \"6ea90f3e-d099-4aa2-840c-ab69127694ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155644 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pclh\" (UniqueName: 
\"kubernetes.io/projected/855749ad-5c93-4f59-be0a-68a0ea0bee93-kube-api-access-5pclh\") pod \"service-ca-9c57cc56f-hxpst\" (UID: \"855749ad-5c93-4f59-be0a-68a0ea0bee93\") " pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155660 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-bound-sa-token\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155698 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-metrics-certs\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155734 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/855749ad-5c93-4f59-be0a-68a0ea0bee93-signing-key\") pod \"service-ca-9c57cc56f-hxpst\" (UID: \"855749ad-5c93-4f59-be0a-68a0ea0bee93\") " pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155831 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6njcz\" (UniqueName: \"kubernetes.io/projected/36a1e36a-d138-4606-a280-ef688b10a438-kube-api-access-6njcz\") pod \"auto-csr-approver-29536460-jgglv\" (UID: \"36a1e36a-d138-4606-a280-ef688b10a438\") " pod="openshift-infra/auto-csr-approver-29536460-jgglv" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155954 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-tls\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.155975 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-plugins-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156002 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22x84\" (UniqueName: \"kubernetes.io/projected/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-kube-api-access-22x84\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156046 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8e843cdb-2025-4b35-bcee-562907dbd9d9-certs\") pod \"machine-config-server-phwvx\" (UID: \"8e843cdb-2025-4b35-bcee-562907dbd9d9\") " pod="openshift-machine-config-operator/machine-config-server-phwvx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156064 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kts8d\" (UniqueName: \"kubernetes.io/projected/8e843cdb-2025-4b35-bcee-562907dbd9d9-kube-api-access-kts8d\") pod \"machine-config-server-phwvx\" (UID: \"8e843cdb-2025-4b35-bcee-562907dbd9d9\") " pod="openshift-machine-config-operator/machine-config-server-phwvx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156145 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz42c\" (UniqueName: \"kubernetes.io/projected/801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5-kube-api-access-qz42c\") pod \"ingress-canary-hxw4p\" (UID: \"801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5\") " pod="openshift-ingress-canary/ingress-canary-hxw4p" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156264 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-certificates\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156285 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-stats-auth\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156303 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x99jh\" (UniqueName: \"kubernetes.io/projected/6ea90f3e-d099-4aa2-840c-ab69127694ee-kube-api-access-x99jh\") pod \"service-ca-operator-777779d784-lwjmt\" (UID: \"6ea90f3e-d099-4aa2-840c-ab69127694ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156320 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-mountpoint-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc 
kubenswrapper[4998]: I0227 10:21:33.156335 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-trusted-ca\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156351 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hlrx\" (UniqueName: \"kubernetes.io/projected/6d09adc3-707f-4a3e-b268-07e7df9fa3da-kube-api-access-7hlrx\") pod \"migrator-59844c95c7-sw9b4\" (UID: \"6d09adc3-707f-4a3e-b268-07e7df9fa3da\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156397 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbgr4\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-kube-api-access-pbgr4\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156469 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xggj7\" (UID: \"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156488 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de52eac-6daa-4c2b-a61d-cb5a9668c0ea-config-volume\") pod \"dns-default-6bhg7\" 
(UID: \"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea\") " pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156522 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/83d0a613-ab45-4611-a345-66d75c8f0253-proxy-tls\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156538 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k29l9\" (UniqueName: \"kubernetes.io/projected/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-kube-api-access-k29l9\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156573 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-default-certificate\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.156590 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5-cert\") pod \"ingress-canary-hxw4p\" (UID: \"801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5\") " pod="openshift-ingress-canary/ingress-canary-hxw4p" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.157587 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-socket-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: 
\"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.157633 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-registration-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.158642 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/83d0a613-ab45-4611-a345-66d75c8f0253-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.160820 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xggj7\" (UID: \"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.160995 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-plugins-dir\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.161215 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-csi-data-dir\") 
pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.163588 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea90f3e-d099-4aa2-840c-ab69127694ee-config\") pod \"service-ca-operator-777779d784-lwjmt\" (UID: \"6ea90f3e-d099-4aa2-840c-ab69127694ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.168461 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/855749ad-5c93-4f59-be0a-68a0ea0bee93-signing-cabundle\") pod \"service-ca-9c57cc56f-hxpst\" (UID: \"855749ad-5c93-4f59-be0a-68a0ea0bee93\") " pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.173101 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/83d0a613-ab45-4611-a345-66d75c8f0253-images\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.173444 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-service-ca-bundle\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.174050 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-mountpoint-dir\") pod 
\"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.178071 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xggj7\" (UID: \"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.180596 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-certificates\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.182673 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:33.682648983 +0000 UTC m=+245.680919951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.188603 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.201022 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-trusted-ca\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.201834 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34024: no serving certificate available for the kubelet" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.210287 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6"] Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.217304 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8e843cdb-2025-4b35-bcee-562907dbd9d9-node-bootstrap-token\") pod \"machine-config-server-phwvx\" (UID: \"8e843cdb-2025-4b35-bcee-562907dbd9d9\") " 
pod="openshift-machine-config-operator/machine-config-server-phwvx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.217730 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea90f3e-d099-4aa2-840c-ab69127694ee-serving-cert\") pod \"service-ca-operator-777779d784-lwjmt\" (UID: \"6ea90f3e-d099-4aa2-840c-ab69127694ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.218149 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8e843cdb-2025-4b35-bcee-562907dbd9d9-certs\") pod \"machine-config-server-phwvx\" (UID: \"8e843cdb-2025-4b35-bcee-562907dbd9d9\") " pod="openshift-machine-config-operator/machine-config-server-phwvx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.221185 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-metrics-certs\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.221394 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-default-certificate\") pod \"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.221451 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/efcfd94f-2069-47b4-9c8f-a9be5330ff28-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vvpct\" (UID: 
\"efcfd94f-2069-47b4-9c8f-a9be5330ff28\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.222545 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.227319 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/855749ad-5c93-4f59-be0a-68a0ea0bee93-signing-key\") pod \"service-ca-9c57cc56f-hxpst\" (UID: \"855749ad-5c93-4f59-be0a-68a0ea0bee93\") " pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.227486 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-tls\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.227538 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/83d0a613-ab45-4611-a345-66d75c8f0253-proxy-tls\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.227910 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-stats-auth\") pod 
\"router-default-5444994796-4n9qx\" (UID: \"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.228346 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl52g\" (UniqueName: \"kubernetes.io/projected/83d0a613-ab45-4611-a345-66d75c8f0253-kube-api-access-kl52g\") pod \"machine-config-operator-74547568cd-rkzrk\" (UID: \"83d0a613-ab45-4611-a345-66d75c8f0253\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.238926 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmrjr\" (UniqueName: \"kubernetes.io/projected/efcfd94f-2069-47b4-9c8f-a9be5330ff28-kube-api-access-cmrjr\") pod \"package-server-manager-789f6589d5-vvpct\" (UID: \"efcfd94f-2069-47b4-9c8f-a9be5330ff28\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.240744 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k29l9\" (UniqueName: \"kubernetes.io/projected/01657e3a-3a1a-4d73-8bdb-90ddfa4c374b-kube-api-access-k29l9\") pod \"csi-hostpathplugin-f8f5r\" (UID: \"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b\") " pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.257771 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84d9w\" (UniqueName: \"kubernetes.io/projected/5de52eac-6daa-4c2b-a61d-cb5a9668c0ea-kube-api-access-84d9w\") pod \"dns-default-6bhg7\" (UID: \"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea\") " pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.257811 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/5de52eac-6daa-4c2b-a61d-cb5a9668c0ea-metrics-tls\") pod \"dns-default-6bhg7\" (UID: \"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea\") " pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.257908 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz42c\" (UniqueName: \"kubernetes.io/projected/801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5-kube-api-access-qz42c\") pod \"ingress-canary-hxw4p\" (UID: \"801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5\") " pod="openshift-ingress-canary/ingress-canary-hxw4p" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.257953 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de52eac-6daa-4c2b-a61d-cb5a9668c0ea-config-volume\") pod \"dns-default-6bhg7\" (UID: \"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea\") " pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.257973 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5-cert\") pod \"ingress-canary-hxw4p\" (UID: \"801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5\") " pod="openshift-ingress-canary/ingress-canary-hxw4p" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.258012 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.258333 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:33.758320734 +0000 UTC m=+245.756591702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.282303 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de52eac-6daa-4c2b-a61d-cb5a9668c0ea-config-volume\") pod \"dns-default-6bhg7\" (UID: \"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea\") " pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.282910 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5de52eac-6daa-4c2b-a61d-cb5a9668c0ea-metrics-tls\") pod \"dns-default-6bhg7\" (UID: \"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea\") " pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.283952 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5-cert\") pod \"ingress-canary-hxw4p\" (UID: \"801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5\") " pod="openshift-ingress-canary/ingress-canary-hxw4p" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.287630 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22x84\" (UniqueName: \"kubernetes.io/projected/854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7-kube-api-access-22x84\") pod \"router-default-5444994796-4n9qx\" (UID: 
\"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7\") " pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.299185 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34036: no serving certificate available for the kubelet" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.304353 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.319091 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kts8d\" (UniqueName: \"kubernetes.io/projected/8e843cdb-2025-4b35-bcee-562907dbd9d9-kube-api-access-kts8d\") pod \"machine-config-server-phwvx\" (UID: \"8e843cdb-2025-4b35-bcee-562907dbd9d9\") " pod="openshift-machine-config-operator/machine-config-server-phwvx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.322693 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.343976 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.344739 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbgr4\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-kube-api-access-pbgr4\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.358354 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdvb5\" (UniqueName: \"kubernetes.io/projected/daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d-kube-api-access-gdvb5\") pod \"kube-storage-version-migrator-operator-b67b599dd-xggj7\" (UID: \"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.358873 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-bound-sa-token\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.359318 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.359877 4998 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:33.859850702 +0000 UTC m=+245.858121670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.385017 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hlrx\" (UniqueName: \"kubernetes.io/projected/6d09adc3-707f-4a3e-b268-07e7df9fa3da-kube-api-access-7hlrx\") pod \"migrator-59844c95c7-sw9b4\" (UID: \"6d09adc3-707f-4a3e-b268-07e7df9fa3da\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.399540 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34044: no serving certificate available for the kubelet" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.400464 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.413137 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x99jh\" (UniqueName: \"kubernetes.io/projected/6ea90f3e-d099-4aa2-840c-ab69127694ee-kube-api-access-x99jh\") pod \"service-ca-operator-777779d784-lwjmt\" (UID: \"6ea90f3e-d099-4aa2-840c-ab69127694ee\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.414260 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pclh\" (UniqueName: \"kubernetes.io/projected/855749ad-5c93-4f59-be0a-68a0ea0bee93-kube-api-access-5pclh\") pod \"service-ca-9c57cc56f-hxpst\" (UID: \"855749ad-5c93-4f59-be0a-68a0ea0bee93\") " pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.428876 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6njcz\" (UniqueName: \"kubernetes.io/projected/36a1e36a-d138-4606-a280-ef688b10a438-kube-api-access-6njcz\") pod \"auto-csr-approver-29536460-jgglv\" (UID: \"36a1e36a-d138-4606-a280-ef688b10a438\") " pod="openshift-infra/auto-csr-approver-29536460-jgglv" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.445137 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-phwvx" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.458182 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz42c\" (UniqueName: \"kubernetes.io/projected/801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5-kube-api-access-qz42c\") pod \"ingress-canary-hxw4p\" (UID: \"801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5\") " pod="openshift-ingress-canary/ingress-canary-hxw4p" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.461877 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.462522 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:33.962508961 +0000 UTC m=+245.960779929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.504767 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84d9w\" (UniqueName: \"kubernetes.io/projected/5de52eac-6daa-4c2b-a61d-cb5a9668c0ea-kube-api-access-84d9w\") pod \"dns-default-6bhg7\" (UID: \"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea\") " pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.509987 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34058: no serving certificate available for the kubelet" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.537787 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.551445 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.560711 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.569997 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.570188 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.070157348 +0000 UTC m=+246.068428316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.571069 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.571641 4998 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.071621577 +0000 UTC m=+246.069892545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.602657 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34070: no serving certificate available for the kubelet" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.614334 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.663036 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536460-jgglv" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.672108 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.677496 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 10:21:34.177463834 +0000 UTC m=+246.175734802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.679196 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv"] Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.680785 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.681399 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.181386432 +0000 UTC m=+246.179657400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.735641 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hxw4p" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.735926 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.742306 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34086: no serving certificate available for the kubelet" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.744753 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" event={"ID":"16ddfc0b-99f7-4c57-a804-665d86d0411b","Type":"ContainerStarted","Data":"ac855e161bbe947bc76330b00683dccbbdb1df3744c777aa8f871ef1094b359d"} Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.752172 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4n9qx" event={"ID":"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7","Type":"ContainerStarted","Data":"d7e0cfee5dcf3282b81db319863cba6b16818f89bc832f0303a67d0c8237cc73"} Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.754345 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" event={"ID":"d34656b6-50d4-4173-a40b-5a9eddb99397","Type":"ContainerStarted","Data":"4bb522700ddb2116d394225f1d70a75592cea5509207081f17f3cb59ec6ea747"} Feb 27 
10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.757278 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ffk9k" event={"ID":"43e2df1f-102d-4440-bc2b-76d89a47be31","Type":"ContainerStarted","Data":"ffbf94093463be06c7e013536db56752a75d65f5ef4ee4d6aa1b60492601221a"} Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.764153 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" event={"ID":"63800e6f-d2ec-48e9-9739-60ed474ed51b","Type":"ContainerStarted","Data":"55bc53990f45b0716998c945008d01888b42c024820fcc8cbb0a4fbec639d49f"} Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.767458 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" event={"ID":"68f33d7b-1e6e-45c8-b37e-2eff317c25d2","Type":"ContainerStarted","Data":"d90c99716bb50fb4efc6beef9aeb71e99633b1e73849ca493e2d4f4321cf57a3"} Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.774937 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6" event={"ID":"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2","Type":"ContainerStarted","Data":"09c65b5ab3f6508aa2348f4918f83aac29df3c25fd8489b1209ed74ab517476b"} Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.781882 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.783026 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.282977052 +0000 UTC m=+246.281248020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.783395 4998 patch_prober.go:28] interesting pod/downloads-7954f5f757-vn72h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.783452 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vn72h" podUID="daeaab34-be3d-4a1e-964f-17e3661682bc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.795653 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.799904 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.886091 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.902965 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.402944286 +0000 UTC m=+246.401215254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.965140 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34092: no serving certificate available for the kubelet" Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.989359 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.48932853 +0000 UTC m=+246.487599498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.989749 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:33 crc kubenswrapper[4998]: I0227 10:21:33.990114 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:33 crc kubenswrapper[4998]: E0227 10:21:33.990650 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.490641365 +0000 UTC m=+246.488912333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.090815 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.090993 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.590973411 +0000 UTC m=+246.589244379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.091377 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.091795 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.591785653 +0000 UTC m=+246.590056621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.192146 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.192342 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.692316145 +0000 UTC m=+246.690587123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.192501 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.192762 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.692753486 +0000 UTC m=+246.691024454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.209138 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" podStartSLOduration=176.209099314 podStartE2EDuration="2m56.209099314s" podCreationTimestamp="2026-02-27 10:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:34.104654116 +0000 UTC m=+246.102925084" watchObservedRunningTime="2026-02-27 10:21:34.209099314 +0000 UTC m=+246.207370282" Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.301476 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.302005 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.801973485 +0000 UTC m=+246.800244463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.324742 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34104: no serving certificate available for the kubelet" Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.403999 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.404448 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:34.90442869 +0000 UTC m=+246.902699658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.471390 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbr44" podStartSLOduration=177.471363052 podStartE2EDuration="2m57.471363052s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:34.435313105 +0000 UTC m=+246.433584073" watchObservedRunningTime="2026-02-27 10:21:34.471363052 +0000 UTC m=+246.469634040" Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.504786 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.505305 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.00528626 +0000 UTC m=+247.003557228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.607027 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.607479 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.107464076 +0000 UTC m=+247.105735044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.641644 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2ksp8"] Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.686695 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"] Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.708936 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.709071 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.209051016 +0000 UTC m=+247.207321984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.709604 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.709906 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.20989861 +0000 UTC m=+247.208169578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.742593 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" podStartSLOduration=177.742575163 podStartE2EDuration="2m57.742575163s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:34.741078263 +0000 UTC m=+246.739349221" watchObservedRunningTime="2026-02-27 10:21:34.742575163 +0000 UTC m=+246.740846131" Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.810342 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.810558 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.310528333 +0000 UTC m=+247.308799301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.810916 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.812553 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-phwvx" event={"ID":"8e843cdb-2025-4b35-bcee-562907dbd9d9","Type":"ContainerStarted","Data":"f9af9580ce48632b823a5537b10266fa21a571adda0d14a6e03bf8b912c09323"} Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.813326 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv" event={"ID":"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed","Type":"ContainerStarted","Data":"bf7f1180c2b8d6ecba5cca9038cfc5071de58b08c3341be3e2dc2f1f50f7b455"} Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.813382 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ffk9k" event={"ID":"43e2df1f-102d-4440-bc2b-76d89a47be31","Type":"ContainerStarted","Data":"60a33f01300dc62762da2ae077366c62bcfda373240a3ca7afd6410599d865c9"} Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.813987 4998 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.313973278 +0000 UTC m=+247.312244426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.914774 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:34 crc kubenswrapper[4998]: E0227 10:21:34.915196 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.415180767 +0000 UTC m=+247.413451735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:34 crc kubenswrapper[4998]: I0227 10:21:34.996523 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-vn72h" podStartSLOduration=177.996500613 podStartE2EDuration="2m57.996500613s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:34.987824765 +0000 UTC m=+246.986095743" watchObservedRunningTime="2026-02-27 10:21:34.996500613 +0000 UTC m=+246.994771581" Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.003636 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34120: no serving certificate available for the kubelet" Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.017276 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.017670 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 10:21:35.517651622 +0000 UTC m=+247.515922640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.032429 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.067017 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n6r8g"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.120064 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.120277 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.620246189 +0000 UTC m=+247.618517157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.122556 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.123048 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.623035415 +0000 UTC m=+247.621306383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.127508 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvl4h" podStartSLOduration=178.127490728 podStartE2EDuration="2m58.127490728s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:35.127344794 +0000 UTC m=+247.125615772" watchObservedRunningTime="2026-02-27 10:21:35.127490728 +0000 UTC m=+247.125761696" Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.151545 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mrrql" podStartSLOduration=178.151524665 podStartE2EDuration="2m58.151524665s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:35.146128197 +0000 UTC m=+247.144399165" watchObservedRunningTime="2026-02-27 10:21:35.151524665 +0000 UTC m=+247.149795633" Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.223910 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.224327 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.724312117 +0000 UTC m=+247.722583085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.316577 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hpdf9" podStartSLOduration=178.316556711 podStartE2EDuration="2m58.316556711s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:35.267434377 +0000 UTC m=+247.265705345" watchObservedRunningTime="2026-02-27 10:21:35.316556711 +0000 UTC m=+247.314827679" Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.317757 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" podStartSLOduration=177.317747284 podStartE2EDuration="2m57.317747284s" podCreationTimestamp="2026-02-27 10:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:35.309461047 +0000 UTC 
m=+247.307732015" watchObservedRunningTime="2026-02-27 10:21:35.317747284 +0000 UTC m=+247.316018252" Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.326771 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.327150 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.827135121 +0000 UTC m=+247.825406089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.363751 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ggw9l" podStartSLOduration=178.363730802 podStartE2EDuration="2m58.363730802s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:35.363062444 +0000 UTC m=+247.361333432" watchObservedRunningTime="2026-02-27 10:21:35.363730802 +0000 UTC m=+247.362001770" Feb 27 10:21:35 
crc kubenswrapper[4998]: I0227 10:21:35.427984 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.428428 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:35.928412082 +0000 UTC m=+247.926683050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.479413 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7g9hp"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.533081 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.533519 4998 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.033504049 +0000 UTC m=+248.031775017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: W0227 10:21:35.537732 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd89392c8_76ea_4723_8fc3_04fcd6727a23.slice/crio-29476c1f370b6df410e36a2b03ddee7d31b8fac4a120a7f7247bc3bd5c7f1951 WatchSource:0}: Error finding container 29476c1f370b6df410e36a2b03ddee7d31b8fac4a120a7f7247bc3bd5c7f1951: Status 404 returned error can't find the container with id 29476c1f370b6df410e36a2b03ddee7d31b8fac4a120a7f7247bc3bd5c7f1951 Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.539281 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.544330 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.560989 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"] Feb 27 10:21:35 crc kubenswrapper[4998]: W0227 10:21:35.611897 4998 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcc07b80_55eb_465c_9528_6bad6e2bcbc1.slice/crio-9e90368c4f01c02ece04a7bacca5dd221e08a42f1626cee332bfb9dd38293c30 WatchSource:0}: Error finding container 9e90368c4f01c02ece04a7bacca5dd221e08a42f1626cee332bfb9dd38293c30: Status 404 returned error can't find the container with id 9e90368c4f01c02ece04a7bacca5dd221e08a42f1626cee332bfb9dd38293c30 Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.633683 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.636553 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.136532619 +0000 UTC m=+248.134803587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.738075 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.738524 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.238510849 +0000 UTC m=+248.236781817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.790278 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.822198 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.826908 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4n9qx" event={"ID":"854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7","Type":"ContainerStarted","Data":"e0629cf0a842e6824826b49f9903b78f72f42076de177457468570958195f0f1"} Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.836160 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.839050 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.839422 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.33938833 +0000 UTC m=+248.337659308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.844478 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n6r8g" event={"ID":"e32a75fa-f16d-4386-a933-4a6bd43f1bdc","Type":"ContainerStarted","Data":"8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5"} Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.844521 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n6r8g" event={"ID":"e32a75fa-f16d-4386-a933-4a6bd43f1bdc","Type":"ContainerStarted","Data":"8920c4d580634fd5572afc291de8b62123e59b25ac2dc2bd5bc8e84fb88f528b"} Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.856787 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.859689 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4n9qx" podStartSLOduration=178.859676155 podStartE2EDuration="2m58.859676155s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:35.858099812 +0000 UTC m=+247.856370800" watchObservedRunningTime="2026-02-27 
10:21:35.859676155 +0000 UTC m=+247.857947123" Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.863313 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" event={"ID":"63800e6f-d2ec-48e9-9739-60ed474ed51b","Type":"ContainerStarted","Data":"b48a73cb0a1be158eccadc1da7a071bd3eb2e1b708612d65493c200f604f0351"} Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.863354 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" event={"ID":"63800e6f-d2ec-48e9-9739-60ed474ed51b","Type":"ContainerStarted","Data":"5453a5287c21abd0e6d7a21c526557caaacd4f9337d8a60d2136837eaf457716"} Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.897299 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.899268 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.914483 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-n6r8g" podStartSLOduration=178.914467435 podStartE2EDuration="2m58.914467435s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:35.910465584 +0000 UTC m=+247.908736552" watchObservedRunningTime="2026-02-27 10:21:35.914467435 +0000 UTC m=+247.912738403" Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.915167 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tsvns"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.924198 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt"] Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.933139 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qx7dp" podStartSLOduration=178.933123725 podStartE2EDuration="2m58.933123725s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:35.931730917 +0000 UTC m=+247.930001895" watchObservedRunningTime="2026-02-27 10:21:35.933123725 +0000 UTC m=+247.931394693" Feb 27 10:21:35 crc kubenswrapper[4998]: I0227 10:21:35.948024 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:35 crc kubenswrapper[4998]: E0227 10:21:35.951198 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.451185699 +0000 UTC m=+248.449456667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.001249 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.009589 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.012895 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.016299 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.021002 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2vxxl"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.046649 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f8f5r"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.052398 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 
10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.052751 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.552736228 +0000 UTC m=+248.551007186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.056633 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" event={"ID":"68f33d7b-1e6e-45c8-b37e-2eff317c25d2","Type":"ContainerStarted","Data":"3acd15014444fccb7058d9f9babd660fee784963d8fd714bb1720bd3f8c28de8"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.064423 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hxw4p"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.070160 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536460-jgglv"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.074643 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6" event={"ID":"34cb42eb-c801-4afa-9b85-64ea3d8c3ab2","Type":"ContainerStarted","Data":"a1c99f0479aa3ab0c1d1887d3450800834a85ec84dfc13abdbbb6758c12fa47f"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.079611 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-hxpst"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.082198 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.082682 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.086114 4998 patch_prober.go:28] interesting pod/apiserver-76f77b778f-rqwm5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.086178 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" podUID="68f33d7b-1e6e-45c8-b37e-2eff317c25d2" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.086335 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6" event={"ID":"21d518bc-a601-48ce-9d37-67eb5657eea1","Type":"ContainerStarted","Data":"8211dc6b3a9a8f98c0057a0ce64cb112ce4f9bd2f566234556a3226f26b84e9a"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.104512 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv" event={"ID":"4de17c0f-b467-4b89-8152-ecd5eb9cd5ed","Type":"ContainerStarted","Data":"603294ee0db85672a05ad3652fc9ab06e1993a5220c9116d1b146155466e435e"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.141458 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver/apiserver-76f77b778f-rqwm5" podStartSLOduration=179.141441446 podStartE2EDuration="2m59.141441446s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:36.125738256 +0000 UTC m=+248.124009234" watchObservedRunningTime="2026-02-27 10:21:36.141441446 +0000 UTC m=+248.139712414" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.144992 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6bhg7"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.145051 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7"] Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.155508 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.157124 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.657109744 +0000 UTC m=+248.655380712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.161846 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.165074 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hnlvv" podStartSLOduration=179.165053592 podStartE2EDuration="2m59.165053592s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:36.151970774 +0000 UTC m=+248.150241742" watchObservedRunningTime="2026-02-27 10:21:36.165053592 +0000 UTC m=+248.163324560" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.166612 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.170587 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" event={"ID":"d34656b6-50d4-4173-a40b-5a9eddb99397","Type":"ContainerStarted","Data":"603b5f817c5cb9fb822d92c54a11e961ca0df9a4fdf59a1393046f7177b32015"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.172171 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:36 crc 
kubenswrapper[4998]: I0227 10:21:36.174946 4998 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-h2zrl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.175000 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" podUID="d34656b6-50d4-4173-a40b-5a9eddb99397" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.201630 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.204737 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bwhh6" podStartSLOduration=179.204607564 podStartE2EDuration="2m59.204607564s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:36.195532416 +0000 UTC m=+248.193803384" watchObservedRunningTime="2026-02-27 10:21:36.204607564 +0000 UTC m=+248.202878732" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.256002 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" event={"ID":"bcc07b80-55eb-465c-9528-6bad6e2bcbc1","Type":"ContainerStarted","Data":"9e90368c4f01c02ece04a7bacca5dd221e08a42f1626cee332bfb9dd38293c30"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.275807 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.276032 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.776008829 +0000 UTC m=+248.774279807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.282181 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.282640 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.782620359 +0000 UTC m=+248.780891367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.329996 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.330060 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" event={"ID":"cad1f061-186f-4774-ae82-648462d0912a","Type":"ContainerStarted","Data":"25d8f3ab8bb4b74792c54c3a0ff4bb5765ca8d359b1710422f51df18c47983c7"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.359519 4998 patch_prober.go:28] interesting pod/router-default-5444994796-4n9qx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:21:36 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Feb 27 10:21:36 crc kubenswrapper[4998]: [+]process-running ok Feb 27 10:21:36 crc kubenswrapper[4998]: healthz check failed Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.359576 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n9qx" podUID="854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.368527 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-phwvx" event={"ID":"8e843cdb-2025-4b35-bcee-562907dbd9d9","Type":"ContainerStarted","Data":"c2dd1615168ee57de5453a91a792984dd3009402f25061fd139147ff7cf5fd3c"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.371134 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34126: no serving certificate available for the kubelet" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.374922 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" event={"ID":"223282ee-d242-4896-a1b9-9f63a9bb0915","Type":"ContainerStarted","Data":"f9a692f8796aad2576deb220af9aeb561db69389600671e3e61d44bbd69a1dd2"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.374965 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" event={"ID":"223282ee-d242-4896-a1b9-9f63a9bb0915","Type":"ContainerStarted","Data":"202c1812cc5cb59b557f42960d407f0c9d0902a807079b0717cb4273e63f3588"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.383248 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.383448 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.883425878 +0000 UTC m=+248.881696856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.383576 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.383923 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.883913491 +0000 UTC m=+248.882184459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.405547 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" podUID="854e4003-37ab-473a-a282-6e9c453dfd52" containerName="route-controller-manager" containerID="cri-o://210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65" gracePeriod=30 Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.406084 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp" event={"ID":"d89392c8-76ea-4723-8fc3-04fcd6727a23","Type":"ContainerStarted","Data":"29476c1f370b6df410e36a2b03ddee7d31b8fac4a120a7f7247bc3bd5c7f1951"} Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.407032 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" podUID="65b6d855-bcdd-41e5-b3ea-e269a1b6b689" containerName="controller-manager" containerID="cri-o://3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166" gracePeriod=30 Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.407488 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" podStartSLOduration=179.407476677 podStartE2EDuration="2m59.407476677s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:36.312764564 +0000 UTC m=+248.311035552" watchObservedRunningTime="2026-02-27 10:21:36.407476677 +0000 UTC m=+248.405747645" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.407752 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.413669 4998 patch_prober.go:28] interesting pod/console-operator-58897d9998-ffk9k container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.413728 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ffk9k" podUID="43e2df1f-102d-4440-bc2b-76d89a47be31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.453037 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-phwvx" podStartSLOduration=6.453015353 podStartE2EDuration="6.453015353s" podCreationTimestamp="2026-02-27 10:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:36.408247527 +0000 UTC m=+248.406518515" watchObservedRunningTime="2026-02-27 10:21:36.453015353 +0000 UTC m=+248.451286321" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.493548 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.495557 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:36.995531356 +0000 UTC m=+248.993802324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.512529 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ffk9k" podStartSLOduration=179.51250509 podStartE2EDuration="2m59.51250509s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:36.458595425 +0000 UTC m=+248.456866403" watchObservedRunningTime="2026-02-27 10:21:36.51250509 +0000 UTC m=+248.510776068" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.514560 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" podStartSLOduration=179.514523266 podStartE2EDuration="2m59.514523266s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:36.51137498 +0000 UTC m=+248.509645948" watchObservedRunningTime="2026-02-27 10:21:36.514523266 +0000 UTC m=+248.512794244" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.597182 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.597890 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.097876547 +0000 UTC m=+249.096147515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.700522 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.700662 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.700947 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.200931417 +0000 UTC m=+249.199202385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.802292 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.802807 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.302790355 +0000 UTC m=+249.301061333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:36 crc kubenswrapper[4998]: I0227 10:21:36.906260 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:36 crc kubenswrapper[4998]: E0227 10:21:36.907549 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.407532001 +0000 UTC m=+249.405802969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.011012 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.011399 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.511386793 +0000 UTC m=+249.509657751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.111625 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.112032 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.612015417 +0000 UTC m=+249.610286385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.119151 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.165294 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"] Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.165847 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854e4003-37ab-473a-a282-6e9c453dfd52" containerName="route-controller-manager" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.165864 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="854e4003-37ab-473a-a282-6e9c453dfd52" containerName="route-controller-manager" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.165978 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="854e4003-37ab-473a-a282-6e9c453dfd52" containerName="route-controller-manager" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.166428 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.179909 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"] Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.212787 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854e4003-37ab-473a-a282-6e9c453dfd52-serving-cert\") pod \"854e4003-37ab-473a-a282-6e9c453dfd52\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.212855 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-config\") pod \"854e4003-37ab-473a-a282-6e9c453dfd52\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.212887 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-client-ca\") pod \"854e4003-37ab-473a-a282-6e9c453dfd52\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.212922 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/854e4003-37ab-473a-a282-6e9c453dfd52-kube-api-access-fpb9b\") pod \"854e4003-37ab-473a-a282-6e9c453dfd52\" (UID: \"854e4003-37ab-473a-a282-6e9c453dfd52\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.213160 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.213885 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.713865265 +0000 UTC m=+249.712136233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.213884 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-client-ca" (OuterVolumeSpecName: "client-ca") pod "854e4003-37ab-473a-a282-6e9c453dfd52" (UID: "854e4003-37ab-473a-a282-6e9c453dfd52"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.214614 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-config" (OuterVolumeSpecName: "config") pod "854e4003-37ab-473a-a282-6e9c453dfd52" (UID: "854e4003-37ab-473a-a282-6e9c453dfd52"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.242198 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854e4003-37ab-473a-a282-6e9c453dfd52-kube-api-access-fpb9b" (OuterVolumeSpecName: "kube-api-access-fpb9b") pod "854e4003-37ab-473a-a282-6e9c453dfd52" (UID: "854e4003-37ab-473a-a282-6e9c453dfd52"). InnerVolumeSpecName "kube-api-access-fpb9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.260468 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854e4003-37ab-473a-a282-6e9c453dfd52-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "854e4003-37ab-473a-a282-6e9c453dfd52" (UID: "854e4003-37ab-473a-a282-6e9c453dfd52"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.287533 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314455 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.314704 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.814653553 +0000 UTC m=+249.812924531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314736 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df1e0950-f8e5-4f2c-b7db-7045eba23868-serving-cert\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314765 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-config\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314783 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-client-ca\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314824 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314909 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kvpb\" (UniqueName: \"kubernetes.io/projected/df1e0950-f8e5-4f2c-b7db-7045eba23868-kube-api-access-2kvpb\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314939 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854e4003-37ab-473a-a282-6e9c453dfd52-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314949 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314957 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854e4003-37ab-473a-a282-6e9c453dfd52-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.314965 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/854e4003-37ab-473a-a282-6e9c453dfd52-kube-api-access-fpb9b\") on node \"crc\" DevicePath \"\"" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.315203 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.815191478 +0000 UTC m=+249.813462566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.328388 4998 patch_prober.go:28] interesting pod/router-default-5444994796-4n9qx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:21:37 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Feb 27 10:21:37 crc kubenswrapper[4998]: [+]process-running ok Feb 27 10:21:37 crc kubenswrapper[4998]: healthz check failed Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.328585 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n9qx" podUID="854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416329 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-config\") pod \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416427 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-serving-cert\") pod \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416448 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-client-ca\") pod \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416470 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-proxy-ca-bundles\") pod \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416536 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416570 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qw6r\" (UniqueName: \"kubernetes.io/projected/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-kube-api-access-5qw6r\") pod \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\" (UID: \"65b6d855-bcdd-41e5-b3ea-e269a1b6b689\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416854 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kvpb\" (UniqueName: \"kubernetes.io/projected/df1e0950-f8e5-4f2c-b7db-7045eba23868-kube-api-access-2kvpb\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: 
\"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416887 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df1e0950-f8e5-4f2c-b7db-7045eba23868-serving-cert\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416907 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-config\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.416922 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-client-ca\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.416948 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.916927362 +0000 UTC m=+249.915198330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.417018 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.417281 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-client-ca" (OuterVolumeSpecName: "client-ca") pod "65b6d855-bcdd-41e5-b3ea-e269a1b6b689" (UID: "65b6d855-bcdd-41e5-b3ea-e269a1b6b689"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.417334 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:37.917327232 +0000 UTC m=+249.915598190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.417576 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "65b6d855-bcdd-41e5-b3ea-e269a1b6b689" (UID: "65b6d855-bcdd-41e5-b3ea-e269a1b6b689"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.417764 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-config" (OuterVolumeSpecName: "config") pod "65b6d855-bcdd-41e5-b3ea-e269a1b6b689" (UID: "65b6d855-bcdd-41e5-b3ea-e269a1b6b689"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.417879 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-client-ca\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.418694 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-config\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.428438 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" event={"ID":"855749ad-5c93-4f59-be0a-68a0ea0bee93","Type":"ContainerStarted","Data":"f36b129bee29518a9f4dc0b6b4872ffdb251ced8b4d557eacfe9ffa90dcaeadd"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.429155 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-kube-api-access-5qw6r" (OuterVolumeSpecName: "kube-api-access-5qw6r") pod "65b6d855-bcdd-41e5-b3ea-e269a1b6b689" (UID: "65b6d855-bcdd-41e5-b3ea-e269a1b6b689"). InnerVolumeSpecName "kube-api-access-5qw6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.429414 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "65b6d855-bcdd-41e5-b3ea-e269a1b6b689" (UID: "65b6d855-bcdd-41e5-b3ea-e269a1b6b689"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.430363 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" event={"ID":"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d","Type":"ContainerStarted","Data":"26739144c7efd16f9697063275fcd28c4cad2e0b5cdd8c2a2e56f06afb46d1e6"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.439746 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" event={"ID":"9e380580-8ee5-4746-8abd-1e89104afa78","Type":"ContainerStarted","Data":"55473eec1645307a3554858a44d42b82e7f3ed2290c080582dc702281f6a2806"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.441979 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df1e0950-f8e5-4f2c-b7db-7045eba23868-serving-cert\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.445682 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" event={"ID":"46053881-83ad-4dad-ae13-950fc812a5ed","Type":"ContainerStarted","Data":"87d484a140621cd18da395fa8cf417214bd34d1e1a2761ffad455b7046089ca7"} Feb 27 10:21:37 crc 
kubenswrapper[4998]: I0227 10:21:37.445739 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" event={"ID":"46053881-83ad-4dad-ae13-950fc812a5ed","Type":"ContainerStarted","Data":"9ba5f837d7754f5a6e8433be627c7094bb5e37ae91bcfff0fea395dca598e4a8"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.450897 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" event={"ID":"cad1f061-186f-4774-ae82-648462d0912a","Type":"ContainerStarted","Data":"ac4bce8e9ea855f816d1d03dc4ee253137d2b1075e1c373a5ae34ca1b35439f7"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.453079 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kvpb\" (UniqueName: \"kubernetes.io/projected/df1e0950-f8e5-4f2c-b7db-7045eba23868-kube-api-access-2kvpb\") pod \"route-controller-manager-cffc98ff8-ztcc5\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") " pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.454470 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4" event={"ID":"6d09adc3-707f-4a3e-b268-07e7df9fa3da","Type":"ContainerStarted","Data":"48cfae9f168f361442a3ef7c303465bd39383641e8d2c6db561ed24bd932f019"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.454628 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4" event={"ID":"6d09adc3-707f-4a3e-b268-07e7df9fa3da","Type":"ContainerStarted","Data":"a32ee6e03293ceef70cbbe9d669a59119848426892dd00e3179dcb1766e78f8d"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.455911 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" event={"ID":"6ea90f3e-d099-4aa2-840c-ab69127694ee","Type":"ContainerStarted","Data":"f3181524c61e3351403e8694909f36ac059c88fb52dfdc111ecc3873f85bb635"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.457064 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" event={"ID":"b820abcf-cb2e-4fef-be37-060602dac285","Type":"ContainerStarted","Data":"177f4364f6d07079c084ad89efa5e00e6aa5bfb98f37d8b2dd57e7bd4e12447d"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.457167 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" event={"ID":"b820abcf-cb2e-4fef-be37-060602dac285","Type":"ContainerStarted","Data":"fd5773019f5866428c54d548d96c146c005c2976228fc53c253ac2ef5026f4c7"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.458162 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.460928 4998 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-p9c6z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.461047 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" podUID="b820abcf-cb2e-4fef-be37-060602dac285" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.461925 4998 generic.go:334] "Generic (PLEG): container finished" 
podID="bcc07b80-55eb-465c-9528-6bad6e2bcbc1" containerID="b00c3c922c4050d597fa241fd515345b4f81fe448556905e9ca5c503e4e20518" exitCode=0 Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.462079 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" event={"ID":"bcc07b80-55eb-465c-9528-6bad6e2bcbc1","Type":"ContainerDied","Data":"b00c3c922c4050d597fa241fd515345b4f81fe448556905e9ca5c503e4e20518"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.463418 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6bhg7" event={"ID":"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea","Type":"ContainerStarted","Data":"a314c121524de0d4c22dcae333b26959d3c9876f1cd7257f93994037ddcdc20a"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.469483 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" event={"ID":"9881d4cb-217e-455b-b8f3-0ad24a1e51d7","Type":"ContainerStarted","Data":"a9d5b7dcd59174d82f9b5e579f11d6598d673771bfdf79ed28439493adb089a2"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.473260 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fd4sb" podStartSLOduration=180.473241223 podStartE2EDuration="3m0.473241223s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:37.471949558 +0000 UTC m=+249.470220526" watchObservedRunningTime="2026-02-27 10:21:37.473241223 +0000 UTC m=+249.471512201" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.494060 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" 
event={"ID":"3d03c805-40fa-4fd0-a049-db518941b121","Type":"ContainerStarted","Data":"f43a516620037e03f03a391ae0512c682c729c2835ad08a4dbadee0360d0259a"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.494105 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" event={"ID":"3d03c805-40fa-4fd0-a049-db518941b121","Type":"ContainerStarted","Data":"077d1ecce8e58188a93a843fcefdf4726ec75aefab624dca66e2ab6f64e8a7bf"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.495031 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.496848 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" event={"ID":"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b","Type":"ContainerStarted","Data":"6521953f8bb105ee41b4e746dc894d12b09caa0f20a279b5e982d1c03f3a4693"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.501932 4998 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pxm5q container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.501986 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" podUID="3d03c805-40fa-4fd0-a049-db518941b121" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.504028 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct" 
event={"ID":"efcfd94f-2069-47b4-9c8f-a9be5330ff28","Type":"ContainerStarted","Data":"420e60bb303a5de44ff80f14116e23877bb1bc5d62509fe9c684894ce29c19e8"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.504072 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct" event={"ID":"efcfd94f-2069-47b4-9c8f-a9be5330ff28","Type":"ContainerStarted","Data":"189d315ebcd024bf607aa75f11cd61420b72b7e21d6c119415ae9f03fb92d37a"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.505236 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.507419 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" event={"ID":"8a02b4b9-556f-4dc2-9647-68951959ab71","Type":"ContainerStarted","Data":"cd89e30d84cd16ba4a51fce8fc39edfb606abe8bc900f4164074340abc8b603b"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.518271 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.518662 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.518673 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-client-ca\") on node \"crc\" 
DevicePath \"\"" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.518682 4998 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.518690 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qw6r\" (UniqueName: \"kubernetes.io/projected/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-kube-api-access-5qw6r\") on node \"crc\" DevicePath \"\"" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.518700 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b6d855-bcdd-41e5-b3ea-e269a1b6b689-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.519593 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.019564571 +0000 UTC m=+250.017835559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.520548 4998 generic.go:334] "Generic (PLEG): container finished" podID="65b6d855-bcdd-41e5-b3ea-e269a1b6b689" containerID="3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166" exitCode=0 Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.520828 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.520882 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" event={"ID":"65b6d855-bcdd-41e5-b3ea-e269a1b6b689","Type":"ContainerDied","Data":"3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.522324 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2ksp8" event={"ID":"65b6d855-bcdd-41e5-b3ea-e269a1b6b689","Type":"ContainerDied","Data":"145778089d579e512edcc8aa52ce46633b5623963c41d0f6ae5e23664dc4de6f"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.522351 4998 scope.go:117] "RemoveContainer" containerID="3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.538708 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z" 
podStartSLOduration=180.538692154 podStartE2EDuration="3m0.538692154s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:37.537028359 +0000 UTC m=+249.535299327" watchObservedRunningTime="2026-02-27 10:21:37.538692154 +0000 UTC m=+249.536963122" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.542533 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6" event={"ID":"21d518bc-a601-48ce-9d37-67eb5657eea1","Type":"ContainerStarted","Data":"5833f67284723dc175011a2e31c64b84e58c96b68fd6e3a57b2614a82fc7758b"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.557533 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns" event={"ID":"4e349c3e-b968-4c13-968e-124554aca7d2","Type":"ContainerStarted","Data":"190084c60cb7a54467473f0beff7ba96a9df48f23d9cf5db87f7104a1d9955fd"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.565420 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536460-jgglv" event={"ID":"36a1e36a-d138-4606-a280-ef688b10a438","Type":"ContainerStarted","Data":"4ec9b50ba1f82d02ab11462b04a27face6fd148daa23b43a4e64ac420b6e5527"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.567054 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" podStartSLOduration=180.56704342 podStartE2EDuration="3m0.56704342s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:37.562606798 +0000 UTC m=+249.560877786" watchObservedRunningTime="2026-02-27 10:21:37.56704342 +0000 UTC 
m=+249.565314388" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.572057 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" event={"ID":"c085501f-cb61-43a3-816b-dc1e744642aa","Type":"ContainerStarted","Data":"7489bc11f5fc85d3a7b841bbc48ff36d2c8240327e128b977e6001b4b9bcbf89"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.574730 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2ksp8"] Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.581481 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2ksp8"] Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.583402 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg" event={"ID":"827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d","Type":"ContainerStarted","Data":"c4290a89216e166bfde40cfce2e5351f20c3afcef9ebcfa5f42fecbac9db3dc2"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.584855 4998 scope.go:117] "RemoveContainer" containerID="3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.586755 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166\": container with ID starting with 3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166 not found: ID does not exist" containerID="3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.586796 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166"} err="failed to get container status 
\"3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166\": rpc error: code = NotFound desc = could not find container \"3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166\": container with ID starting with 3d4a1376472a5aa619ebaf47a5bffeea1351e4b0a9c305f485fe74de13952166 not found: ID does not exist" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.589928 4998 generic.go:334] "Generic (PLEG): container finished" podID="854e4003-37ab-473a-a282-6e9c453dfd52" containerID="210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65" exitCode=0 Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.590051 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.591171 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" event={"ID":"854e4003-37ab-473a-a282-6e9c453dfd52","Type":"ContainerDied","Data":"210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.591239 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r" event={"ID":"854e4003-37ab-473a-a282-6e9c453dfd52","Type":"ContainerDied","Data":"c53fcc1b5954074f7456fd74d5820699fc456c603c0081670abb490af0bb9bac"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.591262 4998 scope.go:117] "RemoveContainer" containerID="210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.596327 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lg8l6" podStartSLOduration=180.596302601 podStartE2EDuration="3m0.596302601s" 
podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:37.596207348 +0000 UTC m=+249.594478326" watchObservedRunningTime="2026-02-27 10:21:37.596302601 +0000 UTC m=+249.594573569" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.610254 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp" event={"ID":"d89392c8-76ea-4723-8fc3-04fcd6727a23","Type":"ContainerStarted","Data":"5b0045f17bea087539fb43ff4331ca7a4451516180f7713f71e5c326308879b6"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.616360 4998 scope.go:117] "RemoveContainer" containerID="210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.617616 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65\": container with ID starting with 210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65 not found: ID does not exist" containerID="210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.617646 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65"} err="failed to get container status \"210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65\": rpc error: code = NotFound desc = could not find container \"210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65\": container with ID starting with 210b0140d48ed7811deeac3524b035673bb8a315a7c06b9601dd83eda3844f65 not found: ID does not exist" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.619703 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.620043 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.1200303 +0000 UTC m=+250.118301268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.628723 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" event={"ID":"83d0a613-ab45-4611-a345-66d75c8f0253","Type":"ContainerStarted","Data":"7f005a9de6d56deebb4ffae16e56e1befc3b7a1b10b8886a914e76bdf6195412"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.634685 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"] Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.637478 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x828r"] Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.640260 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hxw4p" event={"ID":"801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5","Type":"ContainerStarted","Data":"5d7e725f49ec075e2690e23289e789ad39a40e06910c1048c70398d4a2bbc61e"} Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.651567 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ffk9k" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.655621 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.659389 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68mc7" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.661777 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7g9hp" podStartSLOduration=180.661753521 podStartE2EDuration="3m0.661753521s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:37.649727543 +0000 UTC m=+249.647998511" watchObservedRunningTime="2026-02-27 10:21:37.661753521 +0000 UTC m=+249.660024489" Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.721004 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.723570 4998 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.223550863 +0000 UTC m=+250.221821831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.827853 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.844159 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.344128773 +0000 UTC m=+250.342399751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:37 crc kubenswrapper[4998]: I0227 10:21:37.929679 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:37 crc kubenswrapper[4998]: E0227 10:21:37.930179 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.430162317 +0000 UTC m=+250.428433295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.031002 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.031660 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.531641284 +0000 UTC m=+250.529912252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.032110 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"]
Feb 27 10:21:38 crc kubenswrapper[4998]: W0227 10:21:38.071508 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf1e0950_f8e5_4f2c_b7db_7045eba23868.slice/crio-352fd5099b0e1101e546427b1a3098d0bdaf9533e703db73ff713638aa4931ce WatchSource:0}: Error finding container 352fd5099b0e1101e546427b1a3098d0bdaf9533e703db73ff713638aa4931ce: Status 404 returned error can't find the container with id 352fd5099b0e1101e546427b1a3098d0bdaf9533e703db73ff713638aa4931ce
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.132690 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.132945 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.632917746 +0000 UTC m=+250.631188714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.133082 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.133457 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.633450701 +0000 UTC m=+250.631721669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.234342 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.234586 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.734553898 +0000 UTC m=+250.732824886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.234706 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.235166 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.735154594 +0000 UTC m=+250.733425562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.328919 4998 patch_prober.go:28] interesting pod/router-default-5444994796-4n9qx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 10:21:38 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld
Feb 27 10:21:38 crc kubenswrapper[4998]: [+]process-running ok
Feb 27 10:21:38 crc kubenswrapper[4998]: healthz check failed
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.328986 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n9qx" podUID="854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.336571 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.336924 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.836910658 +0000 UTC m=+250.835181616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.442047 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.442739 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:38.942725054 +0000 UTC m=+250.940996022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.543155 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.543523 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.043503972 +0000 UTC m=+251.041774940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.644792 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.645194 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.145178955 +0000 UTC m=+251.143449923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.651151 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" event={"ID":"bcc07b80-55eb-465c-9528-6bad6e2bcbc1","Type":"ContainerStarted","Data":"dac8a59a2fe8be6f2f3fe278f77d4b7190d004e0135e6fe81feb460f6b37312d"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.651317 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.653209 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hxw4p" event={"ID":"801352bd-8f9c-4af1-8fd1-8f4ec4a4b6d5","Type":"ContainerStarted","Data":"80ab8e3e35af127bc7594b092d7e8f1cc68f376bfe9cee25817a69d0ef0920bb"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.663675 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4" event={"ID":"6d09adc3-707f-4a3e-b268-07e7df9fa3da","Type":"ContainerStarted","Data":"fcbd94f07a51a40bc3160f556a554cf65b351541827a5d021d82e38613bab759"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.671320 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" event={"ID":"df1e0950-f8e5-4f2c-b7db-7045eba23868","Type":"ContainerStarted","Data":"0cd55ca4f567989dd52f48f5877efb53cba4d798d9fa50663c26a55cc7980879"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.671379 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" event={"ID":"df1e0950-f8e5-4f2c-b7db-7045eba23868","Type":"ContainerStarted","Data":"352fd5099b0e1101e546427b1a3098d0bdaf9533e703db73ff713638aa4931ce"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.671693 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.683129 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" event={"ID":"daa3d572-23af-4c5d-a6a4-7de4c9d1ec3d","Type":"ContainerStarted","Data":"a6f7e18a820557714e7dc9b35444980fd4faa146be2968adc57a908deb457152"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.685630 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" event={"ID":"83d0a613-ab45-4611-a345-66d75c8f0253","Type":"ContainerStarted","Data":"0a4d505b9ac1f687292cdca10dbda926f1622fb97aa76ff5ab89feeaa33ae668"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.685653 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" event={"ID":"83d0a613-ab45-4611-a345-66d75c8f0253","Type":"ContainerStarted","Data":"c0101031de094cfe301b3840c273e0e24ff30632149ae9139bfae00292642802"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.694549 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hxw4p" podStartSLOduration=8.694526125 podStartE2EDuration="8.694526125s" podCreationTimestamp="2026-02-27 10:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.693137368 +0000 UTC m=+250.691408346" watchObservedRunningTime="2026-02-27 10:21:38.694526125 +0000 UTC m=+250.692797093"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.695727 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" podStartSLOduration=181.695717699 podStartE2EDuration="3m1.695717699s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.674031205 +0000 UTC m=+250.672302203" watchObservedRunningTime="2026-02-27 10:21:38.695717699 +0000 UTC m=+250.693988667"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.702955 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" event={"ID":"8a02b4b9-556f-4dc2-9647-68951959ab71","Type":"ContainerStarted","Data":"cd0a2e1ac27234b502709928a14e526ebc04d1a9ca7ebdb6d82015aa7e8e7cb9"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.717847 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" event={"ID":"c085501f-cb61-43a3-816b-dc1e744642aa","Type":"ContainerStarted","Data":"46b0430b4017d749bb019afe1f760bdffc2e41bf243957367170bd5e3534cfed"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.719146 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.721632 4998 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6pn2r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.721687 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" podUID="c085501f-cb61-43a3-816b-dc1e744642aa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.721831 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw9b4" podStartSLOduration=181.721813382 podStartE2EDuration="3m1.721813382s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.718547273 +0000 UTC m=+250.716818241" watchObservedRunningTime="2026-02-27 10:21:38.721813382 +0000 UTC m=+250.720084350"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.739326 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" event={"ID":"6ea90f3e-d099-4aa2-840c-ab69127694ee","Type":"ContainerStarted","Data":"722f1807a79382cda79a6d87eaf0552318b66873488cebff6b35532750d9807e"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.745536 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.746492 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" podStartSLOduration=3.746473658 podStartE2EDuration="3.746473658s" podCreationTimestamp="2026-02-27 10:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.744842002 +0000 UTC m=+250.743112970" watchObservedRunningTime="2026-02-27 10:21:38.746473658 +0000 UTC m=+250.744744616"
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.746932 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.246911169 +0000 UTC m=+251.245182157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.763301 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" event={"ID":"9e380580-8ee5-4746-8abd-1e89104afa78","Type":"ContainerStarted","Data":"51f9b93a39b54a9f226be31e8cbeb3396ebfc70569ed38af396a47243a3bb231"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.763347 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" event={"ID":"9e380580-8ee5-4746-8abd-1e89104afa78","Type":"ContainerStarted","Data":"cbdce9ded1b941f6910132eb024cbd8f2b68bb03de02d174a5a92a46def02233"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.789071 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b6d855-bcdd-41e5-b3ea-e269a1b6b689" path="/var/lib/kubelet/pods/65b6d855-bcdd-41e5-b3ea-e269a1b6b689/volumes"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.792267 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="854e4003-37ab-473a-a282-6e9c453dfd52" path="/var/lib/kubelet/pods/854e4003-37ab-473a-a282-6e9c453dfd52/volumes"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.792865 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6bhg7"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.792897 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" event={"ID":"46053881-83ad-4dad-ae13-950fc812a5ed","Type":"ContainerStarted","Data":"68628ad0f4343b0605aea863c19182b0d26b447aca47f5ac226442fd7c3c80fb"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.792920 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.792934 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6bhg7" event={"ID":"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea","Type":"ContainerStarted","Data":"99285e718e6c56d152ca4b908533ba51e820800295fc1501ead32d5688fd3e7d"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.792946 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6bhg7" event={"ID":"5de52eac-6daa-4c2b-a61d-cb5a9668c0ea","Type":"ContainerStarted","Data":"35c46c0778482f495397e054d5de6bb329c9ef6e094e59bebf7079560533fb40"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.792959 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct" event={"ID":"efcfd94f-2069-47b4-9c8f-a9be5330ff28","Type":"ContainerStarted","Data":"cb82d25447ddd91af45b00e46c867c2cc43f4ae3b3ca97d4fcbba6f75a015ec2"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.798852 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg" event={"ID":"827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d","Type":"ContainerStarted","Data":"d9ae0b030a5eb37e5961de436d34dc1db00d1aece4389f114e5b8e3de9b76332"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.812054 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r" podStartSLOduration=181.812026001 podStartE2EDuration="3m1.812026001s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.784996631 +0000 UTC m=+250.783267599" watchObservedRunningTime="2026-02-27 10:21:38.812026001 +0000 UTC m=+250.810296969"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.812534 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkzrk" podStartSLOduration=181.812525504 podStartE2EDuration="3m1.812525504s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.804504355 +0000 UTC m=+250.802775323" watchObservedRunningTime="2026-02-27 10:21:38.812525504 +0000 UTC m=+250.810796482"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.835857 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7rm6t" podStartSLOduration=181.835842493 podStartE2EDuration="3m1.835842493s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.835546174 +0000 UTC m=+250.833817142" watchObservedRunningTime="2026-02-27 10:21:38.835842493 +0000 UTC m=+250.834113461"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.842304 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns" event={"ID":"4e349c3e-b968-4c13-968e-124554aca7d2","Type":"ContainerStarted","Data":"a4e32bc0454755b67984a3e61ff69dbb6802542cbcb9c3f4abf99ba98ca5973b"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.842342 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns" event={"ID":"4e349c3e-b968-4c13-968e-124554aca7d2","Type":"ContainerStarted","Data":"ca338607ae4756c3a43ae7913e324994d46f96e6598928a7d62f7457dd3bc7eb"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.844298 4998 generic.go:334] "Generic (PLEG): container finished" podID="223282ee-d242-4896-a1b9-9f63a9bb0915" containerID="f9a692f8796aad2576deb220af9aeb561db69389600671e3e61d44bbd69a1dd2" exitCode=0
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.844345 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" event={"ID":"223282ee-d242-4896-a1b9-9f63a9bb0915","Type":"ContainerDied","Data":"f9a692f8796aad2576deb220af9aeb561db69389600671e3e61d44bbd69a1dd2"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.846047 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" event={"ID":"9881d4cb-217e-455b-b8f3-0ad24a1e51d7","Type":"ContainerStarted","Data":"d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.846761 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.849209 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" event={"ID":"855749ad-5c93-4f59-be0a-68a0ea0bee93","Type":"ContainerStarted","Data":"bc0eb33548f8188bf45f7bb93606d0e17db9deb3a1fc1223a61a6f0da529c684"}
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.850142 4998 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2vxxl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body=
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.850195 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" podUID="9881d4cb-217e-455b-b8f3-0ad24a1e51d7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.850644 4998 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pxm5q container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.850669 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" podUID="3d03c805-40fa-4fd0-a049-db518941b121" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.851904 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.854754 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.35473925 +0000 UTC m=+251.353010218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.856253 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lwjmt" podStartSLOduration=180.856236071 podStartE2EDuration="3m0.856236071s" podCreationTimestamp="2026-02-27 10:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.851272115 +0000 UTC m=+250.849543093" watchObservedRunningTime="2026-02-27 10:21:38.856236071 +0000 UTC m=+250.854507049"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.860074 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p9c6z"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.877170 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xggj7" podStartSLOduration=181.877148153 podStartE2EDuration="3m1.877148153s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.873426791 +0000 UTC m=+250.871697759" watchObservedRunningTime="2026-02-27 10:21:38.877148153 +0000 UTC m=+250.875419121"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.921335 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rh88s" podStartSLOduration=181.921310812 podStartE2EDuration="3m1.921310812s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.902091166 +0000 UTC m=+250.900362134" watchObservedRunningTime="2026-02-27 10:21:38.921310812 +0000 UTC m=+250.919581780"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.949813 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hxpst" podStartSLOduration=180.949790181 podStartE2EDuration="3m0.949790181s" podCreationTimestamp="2026-02-27 10:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:38.949776261 +0000 UTC m=+250.948047229" watchObservedRunningTime="2026-02-27 10:21:38.949790181 +0000 UTC m=+250.948061149"
Feb 27 10:21:38 crc kubenswrapper[4998]: I0227 10:21:38.952783 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:38 crc kubenswrapper[4998]: E0227 10:21:38.954619 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.454594353 +0000 UTC m=+251.452865371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.016443 4998 ???:1] "http: TLS handshake error from 192.168.126.11:34134: no serving certificate available for the kubelet"
Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.049935 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tsvns" podStartSLOduration=182.049912341 podStartE2EDuration="3m2.049912341s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:39.016691372 +0000 UTC m=+251.014962340" watchObservedRunningTime="2026-02-27 10:21:39.049912341 +0000 UTC m=+251.048183319"
Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.055144 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.055560 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.555544395 +0000 UTC m=+251.553815363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.064184 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"
Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.106324 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-6bhg7" podStartSLOduration=9.106274854 podStartE2EDuration="9.106274854s" podCreationTimestamp="2026-02-27 10:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:39.085515396 +0000 UTC m=+251.083786394" watchObservedRunningTime="2026-02-27 10:21:39.106274854 +0000 UTC m=+251.104545822"
Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.124948 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" podStartSLOduration=182.124926634 podStartE2EDuration="3m2.124926634s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:39.124016809 +0000 UTC m=+251.122287777" watchObservedRunningTime="2026-02-27 10:21:39.124926634 +0000 UTC m=+251.123197602"
Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.160762 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.160983 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.66095176 +0000 UTC m=+251.659222728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.161273 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.161731 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8
podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.661723301 +0000 UTC m=+251.659994269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.169338 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7mccc" podStartSLOduration=182.169294679 podStartE2EDuration="3m2.169294679s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:39.167602702 +0000 UTC m=+251.165873670" watchObservedRunningTime="2026-02-27 10:21:39.169294679 +0000 UTC m=+251.167565647" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.198657 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b25mg" podStartSLOduration=182.198641941 podStartE2EDuration="3m2.198641941s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:39.197661585 +0000 UTC m=+251.195932563" watchObservedRunningTime="2026-02-27 10:21:39.198641941 +0000 UTC m=+251.196912929" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.227444 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-555685b98-v8h46"] Feb 27 
10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.227710 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b6d855-bcdd-41e5-b3ea-e269a1b6b689" containerName="controller-manager" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.227725 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b6d855-bcdd-41e5-b3ea-e269a1b6b689" containerName="controller-manager" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.227837 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b6d855-bcdd-41e5-b3ea-e269a1b6b689" containerName="controller-manager" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.228265 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.230844 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.237294 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.239781 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct" podStartSLOduration=182.239763367 podStartE2EDuration="3m2.239763367s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:39.238136072 +0000 UTC m=+251.236407060" watchObservedRunningTime="2026-02-27 10:21:39.239763367 +0000 UTC m=+251.238034335" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.241713 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 
10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.243197 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.244289 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.244138 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.253271 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.258701 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-555685b98-v8h46"] Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.262865 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.263188 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.763172977 +0000 UTC m=+251.761443945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.334347 4998 patch_prober.go:28] interesting pod/router-default-5444994796-4n9qx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:21:39 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Feb 27 10:21:39 crc kubenswrapper[4998]: [+]process-running ok Feb 27 10:21:39 crc kubenswrapper[4998]: healthz check failed Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.334433 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n9qx" podUID="854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.364246 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c167d26-89cf-4d88-a85b-78d070d23b2f-serving-cert\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.364323 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-client-ca\") pod 
\"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.364371 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-config\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.364404 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.364480 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-proxy-ca-bundles\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.364503 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwqrp\" (UniqueName: \"kubernetes.io/projected/6c167d26-89cf-4d88-a85b-78d070d23b2f-kube-api-access-jwqrp\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.364911 4998 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.864893161 +0000 UTC m=+251.863164139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.465809 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.466327 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-config\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.466438 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-proxy-ca-bundles\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc 
kubenswrapper[4998]: I0227 10:21:39.466461 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwqrp\" (UniqueName: \"kubernetes.io/projected/6c167d26-89cf-4d88-a85b-78d070d23b2f-kube-api-access-jwqrp\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.466535 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c167d26-89cf-4d88-a85b-78d070d23b2f-serving-cert\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.466573 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-client-ca\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.468536 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-proxy-ca-bundles\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.468633 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-client-ca\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " 
pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.468662 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:39.9686423 +0000 UTC m=+251.966913338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.469818 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-config\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.473951 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c167d26-89cf-4d88-a85b-78d070d23b2f-serving-cert\") pod \"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.488876 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwqrp\" (UniqueName: \"kubernetes.io/projected/6c167d26-89cf-4d88-a85b-78d070d23b2f-kube-api-access-jwqrp\") pod 
\"controller-manager-555685b98-v8h46\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.567527 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.567865 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.067850726 +0000 UTC m=+252.066121694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.570814 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.670280 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.670649 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.170633359 +0000 UTC m=+252.168904327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.771931 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.772285 4998 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.27227201 +0000 UTC m=+252.270542978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.875835 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.876396 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.376375899 +0000 UTC m=+252.374646867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.889628 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" event={"ID":"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b","Type":"ContainerStarted","Data":"f47956c765026240d07905a539976b158e99d3ec9dac8157eba03ceb67a45904"} Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.892299 4998 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2vxxl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.892342 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" podUID="9881d4cb-217e-455b-b8f3-0ad24a1e51d7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.921918 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pxm5q" Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.979827 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-555685b98-v8h46"] Feb 27 10:21:39 crc kubenswrapper[4998]: I0227 10:21:39.979953 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:39 crc kubenswrapper[4998]: E0227 10:21:39.980301 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.480284052 +0000 UTC m=+252.478555020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.082824 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.083002 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.582973933 +0000 UTC m=+252.581244901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.083741 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.084149 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.584133424 +0000 UTC m=+252.582404402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.186898 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.187161 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.687125583 +0000 UTC m=+252.685396551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.187612 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.188027 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.688011037 +0000 UTC m=+252.686282205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.288672 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.289300 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.789284928 +0000 UTC m=+252.787555896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.329092 4998 patch_prober.go:28] interesting pod/router-default-5444994796-4n9qx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 10:21:40 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld
Feb 27 10:21:40 crc kubenswrapper[4998]: [+]process-running ok
Feb 27 10:21:40 crc kubenswrapper[4998]: healthz check failed
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.329154 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n9qx" podUID="854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.363301 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.390282 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.390669 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.890650943 +0000 UTC m=+252.888921911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.494688 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74tm6\" (UniqueName: \"kubernetes.io/projected/223282ee-d242-4896-a1b9-9f63a9bb0915-kube-api-access-74tm6\") pod \"223282ee-d242-4896-a1b9-9f63a9bb0915\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") "
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.495060 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.495401 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.995381509 +0000 UTC m=+252.993652477 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.495503 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/223282ee-d242-4896-a1b9-9f63a9bb0915-config-volume\") pod \"223282ee-d242-4896-a1b9-9f63a9bb0915\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") "
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.496216 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223282ee-d242-4896-a1b9-9f63a9bb0915-config-volume" (OuterVolumeSpecName: "config-volume") pod "223282ee-d242-4896-a1b9-9f63a9bb0915" (UID: "223282ee-d242-4896-a1b9-9f63a9bb0915"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.496487 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/223282ee-d242-4896-a1b9-9f63a9bb0915-secret-volume\") pod \"223282ee-d242-4896-a1b9-9f63a9bb0915\" (UID: \"223282ee-d242-4896-a1b9-9f63a9bb0915\") "
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.497532 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.497618 4998 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/223282ee-d242-4896-a1b9-9f63a9bb0915-config-volume\") on node \"crc\" DevicePath \"\""
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.497910 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:40.997898708 +0000 UTC m=+252.996169676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.505183 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.505278 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.508675 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223282ee-d242-4896-a1b9-9f63a9bb0915-kube-api-access-74tm6" (OuterVolumeSpecName: "kube-api-access-74tm6") pod "223282ee-d242-4896-a1b9-9f63a9bb0915" (UID: "223282ee-d242-4896-a1b9-9f63a9bb0915"). InnerVolumeSpecName "kube-api-access-74tm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.511769 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223282ee-d242-4896-a1b9-9f63a9bb0915-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "223282ee-d242-4896-a1b9-9f63a9bb0915" (UID: "223282ee-d242-4896-a1b9-9f63a9bb0915"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.574623 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pn2r"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.599746 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.600097 4998 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/223282ee-d242-4896-a1b9-9f63a9bb0915-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.600122 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74tm6\" (UniqueName: \"kubernetes.io/projected/223282ee-d242-4896-a1b9-9f63a9bb0915-kube-api-access-74tm6\") on node \"crc\" DevicePath \"\""
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.600203 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.100185637 +0000 UTC m=+253.098456605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.701149 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.701466 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.201455299 +0000 UTC m=+253.199726267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.801812 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.802444 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.302428462 +0000 UTC m=+253.300699430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.903660 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.904019 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.404007332 +0000 UTC m=+253.402278300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.907651 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.922827 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" event={"ID":"6c167d26-89cf-4d88-a85b-78d070d23b2f","Type":"ContainerStarted","Data":"3591329455f4da35e6ac7d1bb87eae596ad4be4b83e5ee06cd4af0cc1b3b1380"}
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.923090 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" event={"ID":"6c167d26-89cf-4d88-a85b-78d070d23b2f","Type":"ContainerStarted","Data":"fc2d5a914b6cc066a48fad7e567a883e51a3478c73c4f3aab647c7629420361c"}
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.923368 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-555685b98-v8h46"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.927521 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89" event={"ID":"223282ee-d242-4896-a1b9-9f63a9bb0915","Type":"ContainerDied","Data":"202c1812cc5cb59b557f42960d407f0c9d0902a807079b0717cb4273e63f3588"}
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.927562 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="202c1812cc5cb59b557f42960d407f0c9d0902a807079b0717cb4273e63f3588"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.927626 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.935318 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" event={"ID":"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b","Type":"ContainerStarted","Data":"38bab7c56e1b8c2c779a45cdb3476b96feec8eb4b8b223995aadcd4e4e7f0b6b"}
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.935979 4998 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2vxxl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body=
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.936125 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" podUID="9881d4cb-217e-455b-b8f3-0ad24a1e51d7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.948397 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-555685b98-v8h46"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.973608 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5v4md"]
Feb 27 10:21:40 crc kubenswrapper[4998]: E0227 10:21:40.973833 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223282ee-d242-4896-a1b9-9f63a9bb0915" containerName="collect-profiles"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.973847 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="223282ee-d242-4896-a1b9-9f63a9bb0915" containerName="collect-profiles"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.974018 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="223282ee-d242-4896-a1b9-9f63a9bb0915" containerName="collect-profiles"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.974869 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.977550 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.988137 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5v4md"]
Feb 27 10:21:40 crc kubenswrapper[4998]: I0227 10:21:40.998613 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" podStartSLOduration=5.998592671 podStartE2EDuration="5.998592671s" podCreationTimestamp="2026-02-27 10:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:40.9960149 +0000 UTC m=+252.994285888" watchObservedRunningTime="2026-02-27 10:21:40.998592671 +0000 UTC m=+252.996863639"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.004393 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.005524 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.505500079 +0000 UTC m=+253.503771107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.092873 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.105184 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rqwm5"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.105889 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-catalog-content\") pod \"certified-operators-5v4md\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") " pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.106036 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-utilities\") pod \"certified-operators-5v4md\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") " pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.106146 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.106236 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzpbc\" (UniqueName: \"kubernetes.io/projected/de440cc8-1a01-4c10-83e6-027afdacde0c-kube-api-access-jzpbc\") pod \"certified-operators-5v4md\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") " pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.106490 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.606475123 +0000 UTC m=+253.604746091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.191728 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sxgdw"]
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.192902 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxgdw"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.197106 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.206958 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.207283 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-catalog-content\") pod \"certified-operators-5v4md\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") " pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.207363 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-utilities\") pod \"certified-operators-5v4md\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") " pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.207445 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzpbc\" (UniqueName: \"kubernetes.io/projected/de440cc8-1a01-4c10-83e6-027afdacde0c-kube-api-access-jzpbc\") pod \"certified-operators-5v4md\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") " pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.207727 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.707700183 +0000 UTC m=+253.705971221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.208525 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-utilities\") pod \"certified-operators-5v4md\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") " pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.208648 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-catalog-content\") pod \"certified-operators-5v4md\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") " pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.226765 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxgdw"]
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.264031 4998 patch_prober.go:28] interesting pod/downloads-7954f5f757-vn72h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.264091 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vn72h" podUID="daeaab34-be3d-4a1e-964f-17e3661682bc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.264167 4998 patch_prober.go:28] interesting pod/downloads-7954f5f757-vn72h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.264214 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vn72h" podUID="daeaab34-be3d-4a1e-964f-17e3661682bc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.274815 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzpbc\" (UniqueName: \"kubernetes.io/projected/de440cc8-1a01-4c10-83e6-027afdacde0c-kube-api-access-jzpbc\") pod \"certified-operators-5v4md\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") " pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.306547 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.313990 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-catalog-content\") pod \"community-operators-sxgdw\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") " pod="openshift-marketplace/community-operators-sxgdw"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.314054 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqx2d\" (UniqueName: \"kubernetes.io/projected/f5d59240-590d-47d4-95f7-de0c01a8d3e2-kube-api-access-fqx2d\") pod \"community-operators-sxgdw\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") " pod="openshift-marketplace/community-operators-sxgdw"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.314086 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-utilities\") pod \"community-operators-sxgdw\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") " pod="openshift-marketplace/community-operators-sxgdw"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.314158 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf"
Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.314734 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.814718201 +0000 UTC m=+253.812989169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.329724 4998 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.333627 4998 patch_prober.go:28] interesting pod/router-default-5444994796-4n9qx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 10:21:41 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld
Feb 27 10:21:41 crc kubenswrapper[4998]: [+]process-running ok
Feb 27 10:21:41 crc kubenswrapper[4998]: healthz check failed
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.333859 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n9qx" podUID="854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.378541 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zbbbf"]
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.379733 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbbbf"
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.391885 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zbbbf"]
Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.417934 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.418172 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.918136642 +0000 UTC m=+253.916407620 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.418451 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-catalog-content\") pod \"community-operators-sxgdw\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") " pod="openshift-marketplace/community-operators-sxgdw" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.418493 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqx2d\" (UniqueName: \"kubernetes.io/projected/f5d59240-590d-47d4-95f7-de0c01a8d3e2-kube-api-access-fqx2d\") pod \"community-operators-sxgdw\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") " pod="openshift-marketplace/community-operators-sxgdw" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.418525 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-utilities\") pod \"community-operators-sxgdw\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") " pod="openshift-marketplace/community-operators-sxgdw" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.418603 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: 
\"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.419064 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:41.919052447 +0000 UTC m=+253.917323415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.419188 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-catalog-content\") pod \"community-operators-sxgdw\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") " pod="openshift-marketplace/community-operators-sxgdw" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.419292 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-utilities\") pod \"community-operators-sxgdw\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") " pod="openshift-marketplace/community-operators-sxgdw" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.490063 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqx2d\" (UniqueName: \"kubernetes.io/projected/f5d59240-590d-47d4-95f7-de0c01a8d3e2-kube-api-access-fqx2d\") pod \"community-operators-sxgdw\" (UID: 
\"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") " pod="openshift-marketplace/community-operators-sxgdw" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.524677 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxgdw" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.526884 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.527157 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-utilities\") pod \"certified-operators-zbbbf\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") " pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.527199 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkjlk\" (UniqueName: \"kubernetes.io/projected/3cb466ff-6f20-443f-983d-49332c97e530-kube-api-access-rkjlk\") pod \"certified-operators-zbbbf\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") " pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.527271 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-catalog-content\") pod \"certified-operators-zbbbf\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") " pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.527375 
4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:42.027355621 +0000 UTC m=+254.025626589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.574585 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zgvgp"] Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.575936 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.587853 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zgvgp"] Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.628583 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-utilities\") pod \"certified-operators-zbbbf\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") " pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.628657 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkjlk\" (UniqueName: \"kubernetes.io/projected/3cb466ff-6f20-443f-983d-49332c97e530-kube-api-access-rkjlk\") pod \"certified-operators-zbbbf\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") " pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.628711 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-catalog-content\") pod \"certified-operators-zbbbf\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") " pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.628742 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.629079 4998 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:42.129067545 +0000 UTC m=+254.127338513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.629531 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-utilities\") pod \"certified-operators-zbbbf\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") " pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.629932 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-catalog-content\") pod \"certified-operators-zbbbf\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") " pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.647678 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkjlk\" (UniqueName: \"kubernetes.io/projected/3cb466ff-6f20-443f-983d-49332c97e530-kube-api-access-rkjlk\") pod \"certified-operators-zbbbf\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") " pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.675201 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-5v4md"] Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.701872 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbbbf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.735256 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.735449 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-utilities\") pod \"community-operators-zgvgp\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") " pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.735505 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhrd\" (UniqueName: \"kubernetes.io/projected/e5dbdfd7-f5af-4244-acbd-508173f391fe-kube-api-access-hbhrd\") pod \"community-operators-zgvgp\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") " pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.735524 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-catalog-content\") pod \"community-operators-zgvgp\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") " pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.735670 4998 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:42.235654812 +0000 UTC m=+254.233925780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.836889 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbhrd\" (UniqueName: \"kubernetes.io/projected/e5dbdfd7-f5af-4244-acbd-508173f391fe-kube-api-access-hbhrd\") pod \"community-operators-zgvgp\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") " pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.837151 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-catalog-content\") pod \"community-operators-zgvgp\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") " pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.837184 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.837275 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-utilities\") pod \"community-operators-zgvgp\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") " pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.837586 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-catalog-content\") pod \"community-operators-zgvgp\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") " pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.837595 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:42.337582941 +0000 UTC m=+254.335853909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.838164 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-utilities\") pod \"community-operators-zgvgp\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") " pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.869033 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbhrd\" (UniqueName: \"kubernetes.io/projected/e5dbdfd7-f5af-4244-acbd-508173f391fe-kube-api-access-hbhrd\") pod \"community-operators-zgvgp\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") " pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.877366 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vlnjr" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.892090 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxgdw"] Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.938620 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zgvgp" Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.938839 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.939022 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:21:42.439007386 +0000 UTC m=+254.437278354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.939053 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:41 crc kubenswrapper[4998]: E0227 10:21:41.939460 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:21:42.439439748 +0000 UTC m=+254.437710716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2m9rf" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.967105 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" event={"ID":"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b","Type":"ContainerStarted","Data":"e461f5b63ec5cf3ffaa86849d46a59cc671d79e2ef6cb8693a5784930b2bc39d"} Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.967396 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" event={"ID":"01657e3a-3a1a-4d73-8bdb-90ddfa4c374b","Type":"ContainerStarted","Data":"81b709c6b98e07d559b7ee24cf9e00a886e750107fe5011e8e02567ff06ccaf9"} Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.988988 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v4md" event={"ID":"de440cc8-1a01-4c10-83e6-027afdacde0c","Type":"ContainerStarted","Data":"98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5"} Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.989032 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v4md" event={"ID":"de440cc8-1a01-4c10-83e6-027afdacde0c","Type":"ContainerStarted","Data":"0c47855489fc5cb1a38fb1f8c84bc7fa689076edd84807ec906a47d0a3c234e4"} Feb 27 10:21:41 crc kubenswrapper[4998]: I0227 10:21:41.990706 4998 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-f8f5r" podStartSLOduration=11.990689711 podStartE2EDuration="11.990689711s" podCreationTimestamp="2026-02-27 10:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:41.987746371 +0000 UTC m=+253.986017339" watchObservedRunningTime="2026-02-27 10:21:41.990689711 +0000 UTC m=+253.988960679" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.024166 4998 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-27T10:21:41.330133514Z","Handler":null,"Name":""} Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.031036 4998 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.031085 4998 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.043372 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.051922 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod 
"8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.145981 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.160372 4998 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.160413 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.190732 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2m9rf\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.213624 4998 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zgvgp"] Feb 27 10:21:42 crc kubenswrapper[4998]: W0227 10:21:42.218957 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5dbdfd7_f5af_4244_acbd_508173f391fe.slice/crio-231df6b323b537fbeff4d7efd8bd0e493fa186cf19351355b021be019f15528c WatchSource:0}: Error finding container 231df6b323b537fbeff4d7efd8bd0e493fa186cf19351355b021be019f15528c: Status 404 returned error can't find the container with id 231df6b323b537fbeff4d7efd8bd0e493fa186cf19351355b021be019f15528c Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.224525 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zbbbf"] Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.328861 4998 patch_prober.go:28] interesting pod/router-default-5444994796-4n9qx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:21:42 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Feb 27 10:21:42 crc kubenswrapper[4998]: [+]process-running ok Feb 27 10:21:42 crc kubenswrapper[4998]: healthz check failed Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.329200 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n9qx" podUID="854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.387025 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.594944 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2m9rf"] Feb 27 10:21:42 crc kubenswrapper[4998]: W0227 10:21:42.601593 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4055490d_1d4a_4b0b_bf94_e2eaa714bc49.slice/crio-dc898960be9c89eb94ebbc11a240cba660c0c1f56449eef533994680d6237798 WatchSource:0}: Error finding container dc898960be9c89eb94ebbc11a240cba660c0c1f56449eef533994680d6237798: Status 404 returned error can't find the container with id dc898960be9c89eb94ebbc11a240cba660c0c1f56449eef533994680d6237798 Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.791759 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.803704 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.803751 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.807610 4998 patch_prober.go:28] interesting pod/console-f9d7485db-n6r8g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.807652 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n6r8g" podUID="e32a75fa-f16d-4386-a933-4a6bd43f1bdc" containerName="console" 
probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.980782 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ft6"] Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.982262 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.988540 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 10:21:42 crc kubenswrapper[4998]: I0227 10:21:42.991502 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ft6"] Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.019171 4998 generic.go:334] "Generic (PLEG): container finished" podID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerID="98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5" exitCode=0 Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.019312 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v4md" event={"ID":"de440cc8-1a01-4c10-83e6-027afdacde0c","Type":"ContainerDied","Data":"98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5"} Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.022667 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" event={"ID":"4055490d-1d4a-4b0b-bf94-e2eaa714bc49","Type":"ContainerStarted","Data":"dc898960be9c89eb94ebbc11a240cba660c0c1f56449eef533994680d6237798"} Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.027547 4998 generic.go:334] "Generic (PLEG): container finished" podID="3cb466ff-6f20-443f-983d-49332c97e530" 
containerID="110021361cae5d6f0b4abe3422b708339a99b126a96aa31b39c970e66871c497" exitCode=0 Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.027632 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbbbf" event={"ID":"3cb466ff-6f20-443f-983d-49332c97e530","Type":"ContainerDied","Data":"110021361cae5d6f0b4abe3422b708339a99b126a96aa31b39c970e66871c497"} Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.027695 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbbbf" event={"ID":"3cb466ff-6f20-443f-983d-49332c97e530","Type":"ContainerStarted","Data":"1ded792f44acdfdeab7dbeb5733d24e8937a75894ddd51fa8bf7c53b3f9f2e13"} Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.031924 4998 generic.go:334] "Generic (PLEG): container finished" podID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerID="b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323" exitCode=0 Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.031992 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxgdw" event={"ID":"f5d59240-590d-47d4-95f7-de0c01a8d3e2","Type":"ContainerDied","Data":"b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323"} Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.032029 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxgdw" event={"ID":"f5d59240-590d-47d4-95f7-de0c01a8d3e2","Type":"ContainerStarted","Data":"e67a887271a70f53462b51c83dc92aafbdfa254451c1a2d290a67b7d220032a7"} Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.034661 4998 generic.go:334] "Generic (PLEG): container finished" podID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerID="75b6b5087eb911517ea62f5e0db50c18fdaee5c495e7e6fcf1936a4a2cdbecfd" exitCode=0 Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.035742 4998 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-zgvgp" event={"ID":"e5dbdfd7-f5af-4244-acbd-508173f391fe","Type":"ContainerDied","Data":"75b6b5087eb911517ea62f5e0db50c18fdaee5c495e7e6fcf1936a4a2cdbecfd"} Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.035783 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgvgp" event={"ID":"e5dbdfd7-f5af-4244-acbd-508173f391fe","Type":"ContainerStarted","Data":"231df6b323b537fbeff4d7efd8bd0e493fa186cf19351355b021be019f15528c"} Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.039732 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.061390 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-utilities\") pod \"redhat-marketplace-n6ft6\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") " pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.061454 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fmpz\" (UniqueName: \"kubernetes.io/projected/1f770761-42e0-4e42-92c0-1e7fb8e45a49-kube-api-access-2fmpz\") pod \"redhat-marketplace-n6ft6\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") " pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.061499 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-catalog-content\") pod \"redhat-marketplace-n6ft6\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") " pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 
10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.163758 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fmpz\" (UniqueName: \"kubernetes.io/projected/1f770761-42e0-4e42-92c0-1e7fb8e45a49-kube-api-access-2fmpz\") pod \"redhat-marketplace-n6ft6\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") " pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.163993 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-catalog-content\") pod \"redhat-marketplace-n6ft6\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") " pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.164207 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-utilities\") pod \"redhat-marketplace-n6ft6\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") " pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.165830 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-catalog-content\") pod \"redhat-marketplace-n6ft6\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") " pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.167302 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-utilities\") pod \"redhat-marketplace-n6ft6\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") " pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 
10:21:43.190888 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.191839 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fmpz\" (UniqueName: \"kubernetes.io/projected/1f770761-42e0-4e42-92c0-1e7fb8e45a49-kube-api-access-2fmpz\") pod \"redhat-marketplace-n6ft6\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") " pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.201374 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.206836 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.211219 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.214250 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.310373 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.325853 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.328784 4998 patch_prober.go:28] interesting pod/router-default-5444994796-4n9qx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:21:43 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Feb 27 10:21:43 crc kubenswrapper[4998]: [+]process-running ok Feb 27 10:21:43 crc kubenswrapper[4998]: healthz check failed Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.328825 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n9qx" podUID="854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.368549 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8xp8s"] Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.370174 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.368832 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14266b7b-880e-466b-a2f1-409ac1046db5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"14266b7b-880e-466b-a2f1-409ac1046db5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.371530 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14266b7b-880e-466b-a2f1-409ac1046db5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"14266b7b-880e-466b-a2f1-409ac1046db5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.380490 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xp8s"] Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.477131 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-catalog-content\") pod \"redhat-marketplace-8xp8s\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") " pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.477214 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14266b7b-880e-466b-a2f1-409ac1046db5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"14266b7b-880e-466b-a2f1-409ac1046db5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.477373 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-utilities\") pod \"redhat-marketplace-8xp8s\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") " pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.477426 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14266b7b-880e-466b-a2f1-409ac1046db5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"14266b7b-880e-466b-a2f1-409ac1046db5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.477472 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5st\" (UniqueName: \"kubernetes.io/projected/ff2b77d3-c103-46bc-9622-7f4a87ea0264-kube-api-access-rs5st\") pod \"redhat-marketplace-8xp8s\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") " pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.477292 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14266b7b-880e-466b-a2f1-409ac1046db5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"14266b7b-880e-466b-a2f1-409ac1046db5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.502391 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14266b7b-880e-466b-a2f1-409ac1046db5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"14266b7b-880e-466b-a2f1-409ac1046db5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.533546 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.539962 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ft6"] Feb 27 10:21:43 crc kubenswrapper[4998]: W0227 10:21:43.562048 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f770761_42e0_4e42_92c0_1e7fb8e45a49.slice/crio-ed9a49348248b9ce9780780623ceaaf70ce2e00deae3fa828662ca38fef25807 WatchSource:0}: Error finding container ed9a49348248b9ce9780780623ceaaf70ce2e00deae3fa828662ca38fef25807: Status 404 returned error can't find the container with id ed9a49348248b9ce9780780623ceaaf70ce2e00deae3fa828662ca38fef25807 Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.578484 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs5st\" (UniqueName: \"kubernetes.io/projected/ff2b77d3-c103-46bc-9622-7f4a87ea0264-kube-api-access-rs5st\") pod \"redhat-marketplace-8xp8s\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") " pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.578588 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-catalog-content\") pod \"redhat-marketplace-8xp8s\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") " pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.578673 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-utilities\") pod \"redhat-marketplace-8xp8s\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") " pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc 
kubenswrapper[4998]: I0227 10:21:43.580219 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-catalog-content\") pod \"redhat-marketplace-8xp8s\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") " pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.610078 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs5st\" (UniqueName: \"kubernetes.io/projected/ff2b77d3-c103-46bc-9622-7f4a87ea0264-kube-api-access-rs5st\") pod \"redhat-marketplace-8xp8s\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") " pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.778471 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-utilities\") pod \"redhat-marketplace-8xp8s\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") " pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:43 crc kubenswrapper[4998]: I0227 10:21:43.795466 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 10:21:43 crc kubenswrapper[4998]: W0227 10:21:43.808587 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod14266b7b_880e_466b_a2f1_409ac1046db5.slice/crio-60cd7ea9bcb7e0a6345e08072ec48f7e11e830b1884f099ef3c8875eb4f09d9c WatchSource:0}: Error finding container 60cd7ea9bcb7e0a6345e08072ec48f7e11e830b1884f099ef3c8875eb4f09d9c: Status 404 returned error can't find the container with id 60cd7ea9bcb7e0a6345e08072ec48f7e11e830b1884f099ef3c8875eb4f09d9c Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.000845 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xp8s" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.042044 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" event={"ID":"4055490d-1d4a-4b0b-bf94-e2eaa714bc49","Type":"ContainerStarted","Data":"db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f"} Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.042994 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.044240 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14266b7b-880e-466b-a2f1-409ac1046db5","Type":"ContainerStarted","Data":"60cd7ea9bcb7e0a6345e08072ec48f7e11e830b1884f099ef3c8875eb4f09d9c"} Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.047549 4998 generic.go:334] "Generic (PLEG): container finished" podID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerID="18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e" exitCode=0 Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.047572 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ft6" event={"ID":"1f770761-42e0-4e42-92c0-1e7fb8e45a49","Type":"ContainerDied","Data":"18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e"} Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.047589 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ft6" event={"ID":"1f770761-42e0-4e42-92c0-1e7fb8e45a49","Type":"ContainerStarted","Data":"ed9a49348248b9ce9780780623ceaaf70ce2e00deae3fa828662ca38fef25807"} Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.062409 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" podStartSLOduration=187.062390356 podStartE2EDuration="3m7.062390356s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:21:44.058429688 +0000 UTC m=+256.056700666" watchObservedRunningTime="2026-02-27 10:21:44.062390356 +0000 UTC m=+256.060661324" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.174097 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7v8l"] Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.175255 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.181349 4998 ???:1] "http: TLS handshake error from 192.168.126.11:52122: no serving certificate available for the kubelet" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.181898 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.190603 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7v8l"] Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.291524 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-utilities\") pod \"redhat-operators-s7v8l\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") " pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.291573 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-catalog-content\") pod \"redhat-operators-s7v8l\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") " pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.291685 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rtb9\" (UniqueName: \"kubernetes.io/projected/c0b13491-88ff-401a-9df3-dc6c981fb11c-kube-api-access-2rtb9\") pod \"redhat-operators-s7v8l\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") " pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.327815 4998 patch_prober.go:28] interesting pod/router-default-5444994796-4n9qx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:21:44 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Feb 27 10:21:44 crc kubenswrapper[4998]: [+]process-running ok Feb 27 10:21:44 crc kubenswrapper[4998]: healthz check failed Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.327879 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n9qx" podUID="854e8a4a-ba6a-4d3a-b20c-429e3bd1c8a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.354193 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xp8s"] Feb 27 10:21:44 crc kubenswrapper[4998]: W0227 10:21:44.368130 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff2b77d3_c103_46bc_9622_7f4a87ea0264.slice/crio-b6d854846ddc3bfd9111302b300de72471ac615eefd3658024ce4e32b2895bd5 WatchSource:0}: Error finding container 
b6d854846ddc3bfd9111302b300de72471ac615eefd3658024ce4e32b2895bd5: Status 404 returned error can't find the container with id b6d854846ddc3bfd9111302b300de72471ac615eefd3658024ce4e32b2895bd5 Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.392861 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-utilities\") pod \"redhat-operators-s7v8l\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") " pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.392898 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-catalog-content\") pod \"redhat-operators-s7v8l\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") " pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.392972 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rtb9\" (UniqueName: \"kubernetes.io/projected/c0b13491-88ff-401a-9df3-dc6c981fb11c-kube-api-access-2rtb9\") pod \"redhat-operators-s7v8l\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") " pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.394738 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-utilities\") pod \"redhat-operators-s7v8l\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") " pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.395406 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-catalog-content\") pod 
\"redhat-operators-s7v8l\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") " pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.417408 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rtb9\" (UniqueName: \"kubernetes.io/projected/c0b13491-88ff-401a-9df3-dc6c981fb11c-kube-api-access-2rtb9\") pod \"redhat-operators-s7v8l\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") " pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.501048 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.576059 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7vs8"] Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.577056 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.614912 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7vs8"] Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.698324 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-catalog-content\") pod \"redhat-operators-l7vs8\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.698714 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-utilities\") pod \"redhat-operators-l7vs8\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " 
pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.698746 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-972ls\" (UniqueName: \"kubernetes.io/projected/5d365be7-41cf-4570-a8fb-ef974affdb95-kube-api-access-972ls\") pod \"redhat-operators-l7vs8\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.733965 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.734718 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.739703 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.740198 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.750675 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.754146 4998 ???:1] "http: TLS handshake error from 192.168.126.11:52136: no serving certificate available for the kubelet" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.800073 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-utilities\") pod \"redhat-operators-l7vs8\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc 
kubenswrapper[4998]: I0227 10:21:44.800141 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-972ls\" (UniqueName: \"kubernetes.io/projected/5d365be7-41cf-4570-a8fb-ef974affdb95-kube-api-access-972ls\") pod \"redhat-operators-l7vs8\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.800179 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.800236 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-catalog-content\") pod \"redhat-operators-l7vs8\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.800262 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.800712 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-utilities\") pod \"redhat-operators-l7vs8\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc 
kubenswrapper[4998]: I0227 10:21:44.800988 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-catalog-content\") pod \"redhat-operators-l7vs8\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.826902 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-972ls\" (UniqueName: \"kubernetes.io/projected/5d365be7-41cf-4570-a8fb-ef974affdb95-kube-api-access-972ls\") pod \"redhat-operators-l7vs8\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.839772 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7v8l"] Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.896189 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.901532 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.901607 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.902073 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:21:44 crc kubenswrapper[4998]: I0227 10:21:44.924202 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:21:45 crc kubenswrapper[4998]: I0227 10:21:45.060456 4998 generic.go:334] "Generic (PLEG): container finished" podID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerID="6cacc16d818fcc29228160f563022b2a902c9c06a0f5b6ac4c159ec5b1bb8aa7" exitCode=0 Feb 27 10:21:45 crc kubenswrapper[4998]: I0227 10:21:45.060621 4998 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xp8s" event={"ID":"ff2b77d3-c103-46bc-9622-7f4a87ea0264","Type":"ContainerDied","Data":"6cacc16d818fcc29228160f563022b2a902c9c06a0f5b6ac4c159ec5b1bb8aa7"} Feb 27 10:21:45 crc kubenswrapper[4998]: I0227 10:21:45.060709 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xp8s" event={"ID":"ff2b77d3-c103-46bc-9622-7f4a87ea0264","Type":"ContainerStarted","Data":"b6d854846ddc3bfd9111302b300de72471ac615eefd3658024ce4e32b2895bd5"} Feb 27 10:21:45 crc kubenswrapper[4998]: I0227 10:21:45.061734 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:21:45 crc kubenswrapper[4998]: I0227 10:21:45.064385 4998 generic.go:334] "Generic (PLEG): container finished" podID="14266b7b-880e-466b-a2f1-409ac1046db5" containerID="50c6fecd14b4fd3c685cf48f6aa611caceea319c39b5dda26b79dccefafe84b8" exitCode=0 Feb 27 10:21:45 crc kubenswrapper[4998]: I0227 10:21:45.064707 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14266b7b-880e-466b-a2f1-409ac1046db5","Type":"ContainerDied","Data":"50c6fecd14b4fd3c685cf48f6aa611caceea319c39b5dda26b79dccefafe84b8"} Feb 27 10:21:45 crc kubenswrapper[4998]: I0227 10:21:45.334654 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:45 crc kubenswrapper[4998]: I0227 10:21:45.337104 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4n9qx" Feb 27 10:21:48 crc kubenswrapper[4998]: I0227 10:21:48.742049 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6bhg7" Feb 27 10:21:51 crc kubenswrapper[4998]: I0227 10:21:51.269048 4998 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-vn72h" Feb 27 10:21:52 crc kubenswrapper[4998]: I0227 10:21:52.818060 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:52 crc kubenswrapper[4998]: I0227 10:21:52.837537 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:21:54 crc kubenswrapper[4998]: I0227 10:21:54.024946 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-555685b98-v8h46"] Feb 27 10:21:54 crc kubenswrapper[4998]: I0227 10:21:54.025596 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" podUID="6c167d26-89cf-4d88-a85b-78d070d23b2f" containerName="controller-manager" containerID="cri-o://3591329455f4da35e6ac7d1bb87eae596ad4be4b83e5ee06cd4af0cc1b3b1380" gracePeriod=30 Feb 27 10:21:54 crc kubenswrapper[4998]: I0227 10:21:54.042141 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"] Feb 27 10:21:54 crc kubenswrapper[4998]: I0227 10:21:54.042435 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" podUID="df1e0950-f8e5-4f2c-b7db-7045eba23868" containerName="route-controller-manager" containerID="cri-o://0cd55ca4f567989dd52f48f5877efb53cba4d798d9fa50663c26a55cc7980879" gracePeriod=30 Feb 27 10:21:54 crc kubenswrapper[4998]: I0227 10:21:54.455319 4998 ???:1] "http: TLS handshake error from 192.168.126.11:37302: no serving certificate available for the kubelet" Feb 27 10:21:56 crc kubenswrapper[4998]: I0227 10:21:56.139717 4998 generic.go:334] "Generic (PLEG): container finished" podID="6c167d26-89cf-4d88-a85b-78d070d23b2f" 
containerID="3591329455f4da35e6ac7d1bb87eae596ad4be4b83e5ee06cd4af0cc1b3b1380" exitCode=0 Feb 27 10:21:56 crc kubenswrapper[4998]: I0227 10:21:56.139766 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" event={"ID":"6c167d26-89cf-4d88-a85b-78d070d23b2f","Type":"ContainerDied","Data":"3591329455f4da35e6ac7d1bb87eae596ad4be4b83e5ee06cd4af0cc1b3b1380"} Feb 27 10:21:57 crc kubenswrapper[4998]: I0227 10:21:57.507358 4998 patch_prober.go:28] interesting pod/route-controller-manager-cffc98ff8-ztcc5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Feb 27 10:21:57 crc kubenswrapper[4998]: I0227 10:21:57.507475 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" podUID="df1e0950-f8e5-4f2c-b7db-7045eba23868" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Feb 27 10:21:59 crc kubenswrapper[4998]: I0227 10:21:59.572179 4998 patch_prober.go:28] interesting pod/controller-manager-555685b98-v8h46 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body= Feb 27 10:21:59 crc kubenswrapper[4998]: I0227 10:21:59.572514 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" podUID="6c167d26-89cf-4d88-a85b-78d070d23b2f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" Feb 27 10:22:00 
crc kubenswrapper[4998]: I0227 10:22:00.148701 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536462-2r92d"] Feb 27 10:22:00 crc kubenswrapper[4998]: I0227 10:22:00.164606 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536462-2r92d" Feb 27 10:22:00 crc kubenswrapper[4998]: I0227 10:22:00.172519 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:22:00 crc kubenswrapper[4998]: I0227 10:22:00.175080 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536462-2r92d"] Feb 27 10:22:00 crc kubenswrapper[4998]: I0227 10:22:00.272948 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87lq\" (UniqueName: \"kubernetes.io/projected/7ea17fe7-41e2-4264-909f-0e905886524b-kube-api-access-s87lq\") pod \"auto-csr-approver-29536462-2r92d\" (UID: \"7ea17fe7-41e2-4264-909f-0e905886524b\") " pod="openshift-infra/auto-csr-approver-29536462-2r92d" Feb 27 10:22:00 crc kubenswrapper[4998]: I0227 10:22:00.375043 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s87lq\" (UniqueName: \"kubernetes.io/projected/7ea17fe7-41e2-4264-909f-0e905886524b-kube-api-access-s87lq\") pod \"auto-csr-approver-29536462-2r92d\" (UID: \"7ea17fe7-41e2-4264-909f-0e905886524b\") " pod="openshift-infra/auto-csr-approver-29536462-2r92d" Feb 27 10:22:00 crc kubenswrapper[4998]: I0227 10:22:00.397742 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s87lq\" (UniqueName: \"kubernetes.io/projected/7ea17fe7-41e2-4264-909f-0e905886524b-kube-api-access-s87lq\") pod \"auto-csr-approver-29536462-2r92d\" (UID: \"7ea17fe7-41e2-4264-909f-0e905886524b\") " pod="openshift-infra/auto-csr-approver-29536462-2r92d" Feb 27 10:22:00 crc 
kubenswrapper[4998]: I0227 10:22:00.498720 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536462-2r92d" Feb 27 10:22:02 crc kubenswrapper[4998]: I0227 10:22:02.392386 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:22:02 crc kubenswrapper[4998]: W0227 10:22:02.782974 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0b13491_88ff_401a_9df3_dc6c981fb11c.slice/crio-ae13e69305cf46c16a9578363ec9a766ef0331332627e76e22f0e954e78b3755 WatchSource:0}: Error finding container ae13e69305cf46c16a9578363ec9a766ef0331332627e76e22f0e954e78b3755: Status 404 returned error can't find the container with id ae13e69305cf46c16a9578363ec9a766ef0331332627e76e22f0e954e78b3755 Feb 27 10:22:02 crc kubenswrapper[4998]: I0227 10:22:02.819255 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:22:02 crc kubenswrapper[4998]: I0227 10:22:02.931964 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14266b7b-880e-466b-a2f1-409ac1046db5-kube-api-access\") pod \"14266b7b-880e-466b-a2f1-409ac1046db5\" (UID: \"14266b7b-880e-466b-a2f1-409ac1046db5\") " Feb 27 10:22:02 crc kubenswrapper[4998]: I0227 10:22:02.932303 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14266b7b-880e-466b-a2f1-409ac1046db5-kubelet-dir\") pod \"14266b7b-880e-466b-a2f1-409ac1046db5\" (UID: \"14266b7b-880e-466b-a2f1-409ac1046db5\") " Feb 27 10:22:02 crc kubenswrapper[4998]: I0227 10:22:02.932402 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14266b7b-880e-466b-a2f1-409ac1046db5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14266b7b-880e-466b-a2f1-409ac1046db5" (UID: "14266b7b-880e-466b-a2f1-409ac1046db5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:22:02 crc kubenswrapper[4998]: I0227 10:22:02.932730 4998 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14266b7b-880e-466b-a2f1-409ac1046db5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:02 crc kubenswrapper[4998]: I0227 10:22:02.944400 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14266b7b-880e-466b-a2f1-409ac1046db5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "14266b7b-880e-466b-a2f1-409ac1046db5" (UID: "14266b7b-880e-466b-a2f1-409ac1046db5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:22:03 crc kubenswrapper[4998]: I0227 10:22:03.034063 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14266b7b-880e-466b-a2f1-409ac1046db5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:03 crc kubenswrapper[4998]: I0227 10:22:03.183808 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14266b7b-880e-466b-a2f1-409ac1046db5","Type":"ContainerDied","Data":"60cd7ea9bcb7e0a6345e08072ec48f7e11e830b1884f099ef3c8875eb4f09d9c"} Feb 27 10:22:03 crc kubenswrapper[4998]: I0227 10:22:03.183837 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:22:03 crc kubenswrapper[4998]: I0227 10:22:03.183852 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60cd7ea9bcb7e0a6345e08072ec48f7e11e830b1884f099ef3c8875eb4f09d9c" Feb 27 10:22:03 crc kubenswrapper[4998]: I0227 10:22:03.186245 4998 generic.go:334] "Generic (PLEG): container finished" podID="df1e0950-f8e5-4f2c-b7db-7045eba23868" containerID="0cd55ca4f567989dd52f48f5877efb53cba4d798d9fa50663c26a55cc7980879" exitCode=0 Feb 27 10:22:03 crc kubenswrapper[4998]: I0227 10:22:03.186311 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" event={"ID":"df1e0950-f8e5-4f2c-b7db-7045eba23868","Type":"ContainerDied","Data":"0cd55ca4f567989dd52f48f5877efb53cba4d798d9fa50663c26a55cc7980879"} Feb 27 10:22:03 crc kubenswrapper[4998]: I0227 10:22:03.187505 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7v8l" event={"ID":"c0b13491-88ff-401a-9df3-dc6c981fb11c","Type":"ContainerStarted","Data":"ae13e69305cf46c16a9578363ec9a766ef0331332627e76e22f0e954e78b3755"} Feb 27 
10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.284069 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.325148 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cc4f5887c-j8pt7"] Feb 27 10:22:05 crc kubenswrapper[4998]: E0227 10:22:05.325457 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c167d26-89cf-4d88-a85b-78d070d23b2f" containerName="controller-manager" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.325473 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c167d26-89cf-4d88-a85b-78d070d23b2f" containerName="controller-manager" Feb 27 10:22:05 crc kubenswrapper[4998]: E0227 10:22:05.325494 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14266b7b-880e-466b-a2f1-409ac1046db5" containerName="pruner" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.325503 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="14266b7b-880e-466b-a2f1-409ac1046db5" containerName="pruner" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.325629 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c167d26-89cf-4d88-a85b-78d070d23b2f" containerName="controller-manager" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.325643 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="14266b7b-880e-466b-a2f1-409ac1046db5" containerName="pruner" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.326100 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.334104 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cc4f5887c-j8pt7"] Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.469608 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-client-ca\") pod \"6c167d26-89cf-4d88-a85b-78d070d23b2f\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.469700 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-config\") pod \"6c167d26-89cf-4d88-a85b-78d070d23b2f\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.469758 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c167d26-89cf-4d88-a85b-78d070d23b2f-serving-cert\") pod \"6c167d26-89cf-4d88-a85b-78d070d23b2f\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.469796 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwqrp\" (UniqueName: \"kubernetes.io/projected/6c167d26-89cf-4d88-a85b-78d070d23b2f-kube-api-access-jwqrp\") pod \"6c167d26-89cf-4d88-a85b-78d070d23b2f\" (UID: \"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.469848 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-proxy-ca-bundles\") pod \"6c167d26-89cf-4d88-a85b-78d070d23b2f\" (UID: 
\"6c167d26-89cf-4d88-a85b-78d070d23b2f\") " Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.470056 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-proxy-ca-bundles\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.470101 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-client-ca\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.470195 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8d7364-856b-4990-97e6-77b661bf2277-serving-cert\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.470264 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdg8f\" (UniqueName: \"kubernetes.io/projected/cf8d7364-856b-4990-97e6-77b661bf2277-kube-api-access-tdg8f\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.470290 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-config\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.470538 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-config" (OuterVolumeSpecName: "config") pod "6c167d26-89cf-4d88-a85b-78d070d23b2f" (UID: "6c167d26-89cf-4d88-a85b-78d070d23b2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.470460 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-client-ca" (OuterVolumeSpecName: "client-ca") pod "6c167d26-89cf-4d88-a85b-78d070d23b2f" (UID: "6c167d26-89cf-4d88-a85b-78d070d23b2f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.471425 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6c167d26-89cf-4d88-a85b-78d070d23b2f" (UID: "6c167d26-89cf-4d88-a85b-78d070d23b2f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.474816 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c167d26-89cf-4d88-a85b-78d070d23b2f-kube-api-access-jwqrp" (OuterVolumeSpecName: "kube-api-access-jwqrp") pod "6c167d26-89cf-4d88-a85b-78d070d23b2f" (UID: "6c167d26-89cf-4d88-a85b-78d070d23b2f"). InnerVolumeSpecName "kube-api-access-jwqrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.474932 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c167d26-89cf-4d88-a85b-78d070d23b2f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6c167d26-89cf-4d88-a85b-78d070d23b2f" (UID: "6c167d26-89cf-4d88-a85b-78d070d23b2f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572036 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-client-ca\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572128 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8d7364-856b-4990-97e6-77b661bf2277-serving-cert\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572162 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdg8f\" (UniqueName: \"kubernetes.io/projected/cf8d7364-856b-4990-97e6-77b661bf2277-kube-api-access-tdg8f\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572183 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-config\") pod 
\"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572279 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-proxy-ca-bundles\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572323 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572335 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c167d26-89cf-4d88-a85b-78d070d23b2f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572365 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwqrp\" (UniqueName: \"kubernetes.io/projected/6c167d26-89cf-4d88-a85b-78d070d23b2f-kube-api-access-jwqrp\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572518 4998 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572573 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c167d26-89cf-4d88-a85b-78d070d23b2f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.572941 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-client-ca\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.573481 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-proxy-ca-bundles\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.573792 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-config\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.590101 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdg8f\" (UniqueName: \"kubernetes.io/projected/cf8d7364-856b-4990-97e6-77b661bf2277-kube-api-access-tdg8f\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc kubenswrapper[4998]: I0227 10:22:05.598391 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8d7364-856b-4990-97e6-77b661bf2277-serving-cert\") pod \"controller-manager-cc4f5887c-j8pt7\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:05 crc 
kubenswrapper[4998]: I0227 10:22:05.648678 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7"
Feb 27 10:22:06 crc kubenswrapper[4998]: I0227 10:22:06.203392 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-555685b98-v8h46" event={"ID":"6c167d26-89cf-4d88-a85b-78d070d23b2f","Type":"ContainerDied","Data":"fc2d5a914b6cc066a48fad7e567a883e51a3478c73c4f3aab647c7629420361c"}
Feb 27 10:22:06 crc kubenswrapper[4998]: I0227 10:22:06.203483 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-555685b98-v8h46"
Feb 27 10:22:06 crc kubenswrapper[4998]: I0227 10:22:06.203846 4998 scope.go:117] "RemoveContainer" containerID="3591329455f4da35e6ac7d1bb87eae596ad4be4b83e5ee06cd4af0cc1b3b1380"
Feb 27 10:22:06 crc kubenswrapper[4998]: I0227 10:22:06.234830 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-555685b98-v8h46"]
Feb 27 10:22:06 crc kubenswrapper[4998]: I0227 10:22:06.238581 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-555685b98-v8h46"]
Feb 27 10:22:06 crc kubenswrapper[4998]: I0227 10:22:06.782174 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c167d26-89cf-4d88-a85b-78d070d23b2f" path="/var/lib/kubelet/pods/6c167d26-89cf-4d88-a85b-78d070d23b2f/volumes"
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.136542 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"
Feb 27 10:22:07 crc kubenswrapper[4998]: E0227 10:22:07.147335 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 10:22:07 crc kubenswrapper[4998]: E0227 10:22:07.147479 4998 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 10:22:07 crc kubenswrapper[4998]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 10:22:07 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6njcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536460-jgglv_openshift-infra(36a1e36a-d138-4606-a280-ef688b10a438): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled
Feb 27 10:22:07 crc kubenswrapper[4998]: > logger="UnhandledError"
Feb 27 10:22:07 crc kubenswrapper[4998]: E0227 10:22:07.149777 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29536460-jgglv" podUID="36a1e36a-d138-4606-a280-ef688b10a438"
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.224691 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5" event={"ID":"df1e0950-f8e5-4f2c-b7db-7045eba23868","Type":"ContainerDied","Data":"352fd5099b0e1101e546427b1a3098d0bdaf9533e703db73ff713638aa4931ce"}
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.224726 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"
Feb 27 10:22:07 crc kubenswrapper[4998]: E0227 10:22:07.227045 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536460-jgglv" podUID="36a1e36a-d138-4606-a280-ef688b10a438"
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.295935 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-client-ca\") pod \"df1e0950-f8e5-4f2c-b7db-7045eba23868\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") "
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.295996 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-config\") pod \"df1e0950-f8e5-4f2c-b7db-7045eba23868\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") "
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.296047 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kvpb\" (UniqueName: \"kubernetes.io/projected/df1e0950-f8e5-4f2c-b7db-7045eba23868-kube-api-access-2kvpb\") pod \"df1e0950-f8e5-4f2c-b7db-7045eba23868\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") "
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.296119 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df1e0950-f8e5-4f2c-b7db-7045eba23868-serving-cert\") pod \"df1e0950-f8e5-4f2c-b7db-7045eba23868\" (UID: \"df1e0950-f8e5-4f2c-b7db-7045eba23868\") "
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.297166 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-client-ca" (OuterVolumeSpecName: "client-ca") pod "df1e0950-f8e5-4f2c-b7db-7045eba23868" (UID: "df1e0950-f8e5-4f2c-b7db-7045eba23868"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.297469 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-config" (OuterVolumeSpecName: "config") pod "df1e0950-f8e5-4f2c-b7db-7045eba23868" (UID: "df1e0950-f8e5-4f2c-b7db-7045eba23868"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.302688 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1e0950-f8e5-4f2c-b7db-7045eba23868-kube-api-access-2kvpb" (OuterVolumeSpecName: "kube-api-access-2kvpb") pod "df1e0950-f8e5-4f2c-b7db-7045eba23868" (UID: "df1e0950-f8e5-4f2c-b7db-7045eba23868"). InnerVolumeSpecName "kube-api-access-2kvpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.303348 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1e0950-f8e5-4f2c-b7db-7045eba23868-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df1e0950-f8e5-4f2c-b7db-7045eba23868" (UID: "df1e0950-f8e5-4f2c-b7db-7045eba23868"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.397771 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df1e0950-f8e5-4f2c-b7db-7045eba23868-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.397802 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.397814 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1e0950-f8e5-4f2c-b7db-7045eba23868-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.397826 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kvpb\" (UniqueName: \"kubernetes.io/projected/df1e0950-f8e5-4f2c-b7db-7045eba23868-kube-api-access-2kvpb\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.485432 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.555266 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"]
Feb 27 10:22:07 crc kubenswrapper[4998]: I0227 10:22:07.556216 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cffc98ff8-ztcc5"]
Feb 27 10:22:08 crc kubenswrapper[4998]: I0227 10:22:08.782201 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1e0950-f8e5-4f2c-b7db-7045eba23868" path="/var/lib/kubelet/pods/df1e0950-f8e5-4f2c-b7db-7045eba23868/volumes"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.264027 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"]
Feb 27 10:22:10 crc kubenswrapper[4998]: E0227 10:22:10.264873 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1e0950-f8e5-4f2c-b7db-7045eba23868" containerName="route-controller-manager"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.264886 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1e0950-f8e5-4f2c-b7db-7045eba23868" containerName="route-controller-manager"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.264992 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1e0950-f8e5-4f2c-b7db-7045eba23868" containerName="route-controller-manager"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.265435 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.268575 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.269032 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.269199 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.269411 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.269650 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.271588 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.280995 4998 scope.go:117] "RemoveContainer" containerID="0cd55ca4f567989dd52f48f5877efb53cba4d798d9fa50663c26a55cc7980879"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.284097 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"]
Feb 27 10:22:10 crc kubenswrapper[4998]: W0227 10:22:10.285271 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4d1d2ac5_97b7_4b33_b6ea_50cd9fffeccd.slice/crio-366abfd0f793a5bdd160f4522e259d7143075c4cac3e9fecf10eb7c465cdaa49 WatchSource:0}: Error finding container 366abfd0f793a5bdd160f4522e259d7143075c4cac3e9fecf10eb7c465cdaa49: Status 404 returned error can't find the container with id 366abfd0f793a5bdd160f4522e259d7143075c4cac3e9fecf10eb7c465cdaa49
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.348797 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghks5\" (UniqueName: \"kubernetes.io/projected/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-kube-api-access-ghks5\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.349128 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-serving-cert\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.349154 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-client-ca\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.352209 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-config\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.449399 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536462-2r92d"]
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.453391 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-serving-cert\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.453452 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-client-ca\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.453530 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-config\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.453580 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghks5\" (UniqueName: \"kubernetes.io/projected/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-kube-api-access-ghks5\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.456017 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-config\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.454997 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-client-ca\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.472363 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-serving-cert\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.487052 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghks5\" (UniqueName: \"kubernetes.io/projected/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-kube-api-access-ghks5\") pod \"route-controller-manager-667964dbd8-56s2h\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.493764 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7vs8"]
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.504508 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.504571 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 10:22:10 crc kubenswrapper[4998]: I0227 10:22:10.635342 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"
Feb 27 10:22:11 crc kubenswrapper[4998]: I0227 10:22:11.253501 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd","Type":"ContainerStarted","Data":"366abfd0f793a5bdd160f4522e259d7143075c4cac3e9fecf10eb7c465cdaa49"}
Feb 27 10:22:12 crc kubenswrapper[4998]: E0227 10:22:12.199210 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 27 10:22:12 crc kubenswrapper[4998]: E0227 10:22:12.199587 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzpbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5v4md_openshift-marketplace(de440cc8-1a01-4c10-83e6-027afdacde0c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 27 10:22:12 crc kubenswrapper[4998]: E0227 10:22:12.200918 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5v4md" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c"
Feb 27 10:22:13 crc kubenswrapper[4998]: I0227 10:22:13.348288 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vvpct"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.017189 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cc4f5887c-j8pt7"]
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.103296 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"]
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.389256 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.390110 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.400930 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.401190 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.403888 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.504431 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.504484 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.605639 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.605695 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.605785 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.628004 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 10:22:14 crc kubenswrapper[4998]: I0227 10:22:14.727689 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 10:22:15 crc kubenswrapper[4998]: W0227 10:22:15.695571 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d365be7_41cf_4570_a8fb_ef974affdb95.slice/crio-d58e9922e839a8ab7e3932375510ab76b505cfa2430bda779613c32344983028 WatchSource:0}: Error finding container d58e9922e839a8ab7e3932375510ab76b505cfa2430bda779613c32344983028: Status 404 returned error can't find the container with id d58e9922e839a8ab7e3932375510ab76b505cfa2430bda779613c32344983028
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.698813 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5v4md" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c"
Feb 27 10:22:15 crc kubenswrapper[4998]: W0227 10:22:15.700365 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea17fe7_41e2_4264_909f_0e905886524b.slice/crio-da81fe2dae9825bb2b1558cf845e81639dd5b8ea0a9ebe191f9523ce3334e2a8 WatchSource:0}: Error finding container da81fe2dae9825bb2b1558cf845e81639dd5b8ea0a9ebe191f9523ce3334e2a8: Status 404 returned error can't find the container with id da81fe2dae9825bb2b1558cf845e81639dd5b8ea0a9ebe191f9523ce3334e2a8
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.763392 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.763540 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqx2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sxgdw_openshift-marketplace(f5d59240-590d-47d4-95f7-de0c01a8d3e2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.764837 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sxgdw" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2"
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.799867 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.800035 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbhrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zgvgp_openshift-marketplace(e5dbdfd7-f5af-4244-acbd-508173f391fe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.801430 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zgvgp" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe"
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.854029 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.854587 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkjlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zbbbf_openshift-marketplace(3cb466ff-6f20-443f-983d-49332c97e530): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 27 10:22:15 crc kubenswrapper[4998]: E0227 10:22:15.855811 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-zbbbf" podUID="3cb466ff-6f20-443f-983d-49332c97e530"
Feb 27 10:22:15 crc kubenswrapper[4998]: I0227 10:22:15.970360 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cc4f5887c-j8pt7"]
Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.254677 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.262871 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"]
Feb 27 10:22:16 crc kubenswrapper[4998]: W0227 10:22:16.266684 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b7d9866_74be_49cd_bcdf_e5d0f6c47d15.slice/crio-97a6ddf71f1bfe39517fd614ec0220553d80eb09c5f8260ecbb02dd877b5cd6c WatchSource:0}: Error finding container 97a6ddf71f1bfe39517fd614ec0220553d80eb09c5f8260ecbb02dd877b5cd6c: Status 404 returned error can't find the container with id 97a6ddf71f1bfe39517fd614ec0220553d80eb09c5f8260ecbb02dd877b5cd6c
Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.280465 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ft6" event={"ID":"1f770761-42e0-4e42-92c0-1e7fb8e45a49","Type":"ContainerStarted","Data":"c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8"}
Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.282291 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d","Type":"ContainerStarted","Data":"15860cdb1e4ec585aef6dab13b64738f0ab4495cdfcb9dd475ccaaf8fe83b6c2"}
Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.283805 4998 generic.go:334] "Generic (PLEG): container finished" podID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerID="17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d" exitCode=0
Feb 27 10:22:16 
crc kubenswrapper[4998]: I0227 10:22:16.283856 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7vs8" event={"ID":"5d365be7-41cf-4570-a8fb-ef974affdb95","Type":"ContainerDied","Data":"17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d"} Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.283873 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7vs8" event={"ID":"5d365be7-41cf-4570-a8fb-ef974affdb95","Type":"ContainerStarted","Data":"d58e9922e839a8ab7e3932375510ab76b505cfa2430bda779613c32344983028"} Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.284952 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd","Type":"ContainerStarted","Data":"9e53268c2691690b3b15100c2b2028fc1b73ab435d13dc437577c718dfc8ded9"} Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.286503 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536462-2r92d" event={"ID":"7ea17fe7-41e2-4264-909f-0e905886524b","Type":"ContainerStarted","Data":"da81fe2dae9825bb2b1558cf845e81639dd5b8ea0a9ebe191f9523ce3334e2a8"} Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.288074 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" event={"ID":"cf8d7364-856b-4990-97e6-77b661bf2277","Type":"ContainerStarted","Data":"08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044"} Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.289053 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.289138 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" event={"ID":"cf8d7364-856b-4990-97e6-77b661bf2277","Type":"ContainerStarted","Data":"8faad9fac49d8cfe3f532674db890f7201b5280952401135014c2bbe285b5c7e"} Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.288111 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" podUID="cf8d7364-856b-4990-97e6-77b661bf2277" containerName="controller-manager" containerID="cri-o://08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044" gracePeriod=30 Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.301338 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.302272 4998 generic.go:334] "Generic (PLEG): container finished" podID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerID="7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18" exitCode=0 Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.302350 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7v8l" event={"ID":"c0b13491-88ff-401a-9df3-dc6c981fb11c","Type":"ContainerDied","Data":"7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18"} Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.322189 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xp8s" event={"ID":"ff2b77d3-c103-46bc-9622-7f4a87ea0264","Type":"ContainerStarted","Data":"6a880d179a30421d371cf14980d7d04a08f8b606e7b4fd8a0cddb1b5e7595c30"} Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.329516 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" 
event={"ID":"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15","Type":"ContainerStarted","Data":"97a6ddf71f1bfe39517fd614ec0220553d80eb09c5f8260ecbb02dd877b5cd6c"} Feb 27 10:22:16 crc kubenswrapper[4998]: E0227 10:22:16.330677 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sxgdw" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" Feb 27 10:22:16 crc kubenswrapper[4998]: E0227 10:22:16.330976 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zbbbf" podUID="3cb466ff-6f20-443f-983d-49332c97e530" Feb 27 10:22:16 crc kubenswrapper[4998]: E0227 10:22:16.331058 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zgvgp" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.357182 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=32.357158562 podStartE2EDuration="32.357158562s" podCreationTimestamp="2026-02-27 10:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:16.354806558 +0000 UTC m=+288.353077546" watchObservedRunningTime="2026-02-27 10:22:16.357158562 +0000 UTC m=+288.355429530" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.359581 4998 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" podStartSLOduration=22.359562149 podStartE2EDuration="22.359562149s" podCreationTimestamp="2026-02-27 10:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:16.337772325 +0000 UTC m=+288.336043283" watchObservedRunningTime="2026-02-27 10:22:16.359562149 +0000 UTC m=+288.357833117" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.763621 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.859579 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-679b689f4c-lmpbh"] Feb 27 10:22:16 crc kubenswrapper[4998]: E0227 10:22:16.859886 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8d7364-856b-4990-97e6-77b661bf2277" containerName="controller-manager" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.859901 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8d7364-856b-4990-97e6-77b661bf2277" containerName="controller-manager" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.860047 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8d7364-856b-4990-97e6-77b661bf2277" containerName="controller-manager" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.860495 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-679b689f4c-lmpbh"] Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.860581 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.941263 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-proxy-ca-bundles\") pod \"cf8d7364-856b-4990-97e6-77b661bf2277\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.941356 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdg8f\" (UniqueName: \"kubernetes.io/projected/cf8d7364-856b-4990-97e6-77b661bf2277-kube-api-access-tdg8f\") pod \"cf8d7364-856b-4990-97e6-77b661bf2277\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.941417 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8d7364-856b-4990-97e6-77b661bf2277-serving-cert\") pod \"cf8d7364-856b-4990-97e6-77b661bf2277\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.941453 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-client-ca\") pod \"cf8d7364-856b-4990-97e6-77b661bf2277\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.941516 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-config\") pod \"cf8d7364-856b-4990-97e6-77b661bf2277\" (UID: \"cf8d7364-856b-4990-97e6-77b661bf2277\") " Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.942155 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-client-ca" (OuterVolumeSpecName: "client-ca") pod "cf8d7364-856b-4990-97e6-77b661bf2277" (UID: "cf8d7364-856b-4990-97e6-77b661bf2277"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.942169 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cf8d7364-856b-4990-97e6-77b661bf2277" (UID: "cf8d7364-856b-4990-97e6-77b661bf2277"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.942347 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-config" (OuterVolumeSpecName: "config") pod "cf8d7364-856b-4990-97e6-77b661bf2277" (UID: "cf8d7364-856b-4990-97e6-77b661bf2277"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.948318 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf8d7364-856b-4990-97e6-77b661bf2277-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cf8d7364-856b-4990-97e6-77b661bf2277" (UID: "cf8d7364-856b-4990-97e6-77b661bf2277"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:22:16 crc kubenswrapper[4998]: I0227 10:22:16.963516 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8d7364-856b-4990-97e6-77b661bf2277-kube-api-access-tdg8f" (OuterVolumeSpecName: "kube-api-access-tdg8f") pod "cf8d7364-856b-4990-97e6-77b661bf2277" (UID: "cf8d7364-856b-4990-97e6-77b661bf2277"). InnerVolumeSpecName "kube-api-access-tdg8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042619 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82833b5f-2598-4958-a974-3ae9455ecc78-serving-cert\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042698 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-proxy-ca-bundles\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042764 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-config\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042783 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9j8\" (UniqueName: \"kubernetes.io/projected/82833b5f-2598-4958-a974-3ae9455ecc78-kube-api-access-nh9j8\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042858 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-client-ca\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042900 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042911 4998 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042921 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdg8f\" (UniqueName: \"kubernetes.io/projected/cf8d7364-856b-4990-97e6-77b661bf2277-kube-api-access-tdg8f\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042928 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8d7364-856b-4990-97e6-77b661bf2277-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.042936 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf8d7364-856b-4990-97e6-77b661bf2277-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.144111 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-config\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 
10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.144169 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh9j8\" (UniqueName: \"kubernetes.io/projected/82833b5f-2598-4958-a974-3ae9455ecc78-kube-api-access-nh9j8\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.144266 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-client-ca\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.144307 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82833b5f-2598-4958-a974-3ae9455ecc78-serving-cert\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.144342 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-proxy-ca-bundles\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.145178 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-client-ca\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: 
\"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.145437 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-proxy-ca-bundles\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.145472 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-config\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.148560 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82833b5f-2598-4958-a974-3ae9455ecc78-serving-cert\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.164454 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh9j8\" (UniqueName: \"kubernetes.io/projected/82833b5f-2598-4958-a974-3ae9455ecc78-kube-api-access-nh9j8\") pod \"controller-manager-679b689f4c-lmpbh\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") " pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.184407 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.210035 4998 csr.go:261] certificate signing request csr-cg6gm is approved, waiting to be issued Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.216929 4998 csr.go:257] certificate signing request csr-cg6gm is issued Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.338946 4998 generic.go:334] "Generic (PLEG): container finished" podID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerID="6a880d179a30421d371cf14980d7d04a08f8b606e7b4fd8a0cddb1b5e7595c30" exitCode=0 Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.339021 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xp8s" event={"ID":"ff2b77d3-c103-46bc-9622-7f4a87ea0264","Type":"ContainerDied","Data":"6a880d179a30421d371cf14980d7d04a08f8b606e7b4fd8a0cddb1b5e7595c30"} Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.343138 4998 generic.go:334] "Generic (PLEG): container finished" podID="7ea17fe7-41e2-4264-909f-0e905886524b" containerID="df0281943c7e1d6fc2638382d6d0da22b78cdbccb85786d23764b01450e40f0c" exitCode=0 Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.343190 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536462-2r92d" event={"ID":"7ea17fe7-41e2-4264-909f-0e905886524b","Type":"ContainerDied","Data":"df0281943c7e1d6fc2638382d6d0da22b78cdbccb85786d23764b01450e40f0c"} Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.348530 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" event={"ID":"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15","Type":"ContainerStarted","Data":"767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74"} Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.348643 4998 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" podUID="5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" containerName="route-controller-manager" containerID="cri-o://767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74" gracePeriod=30 Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.348772 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.355741 4998 generic.go:334] "Generic (PLEG): container finished" podID="cf8d7364-856b-4990-97e6-77b661bf2277" containerID="08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044" exitCode=0 Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.355807 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" event={"ID":"cf8d7364-856b-4990-97e6-77b661bf2277","Type":"ContainerDied","Data":"08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044"} Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.355803 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.355855 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.355868 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc4f5887c-j8pt7" event={"ID":"cf8d7364-856b-4990-97e6-77b661bf2277","Type":"ContainerDied","Data":"8faad9fac49d8cfe3f532674db890f7201b5280952401135014c2bbe285b5c7e"} Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.355932 4998 scope.go:117] "RemoveContainer" containerID="08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.363592 4998 generic.go:334] "Generic (PLEG): container finished" podID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerID="c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8" exitCode=0 Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.363676 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ft6" event={"ID":"1f770761-42e0-4e42-92c0-1e7fb8e45a49","Type":"ContainerDied","Data":"c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8"} Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.369082 4998 generic.go:334] "Generic (PLEG): container finished" podID="cbaa6e79-1a8b-4d0c-bfa6-f9292907081d" containerID="9fb88fee941af450a88308dd6909656a4c4a69be291c42102286338bf28cdf9e" exitCode=0 Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.369157 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d","Type":"ContainerDied","Data":"9fb88fee941af450a88308dd6909656a4c4a69be291c42102286338bf28cdf9e"} Feb 27 10:22:17 
crc kubenswrapper[4998]: I0227 10:22:17.372865 4998 generic.go:334] "Generic (PLEG): container finished" podID="4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd" containerID="9e53268c2691690b3b15100c2b2028fc1b73ab435d13dc437577c718dfc8ded9" exitCode=0 Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.372905 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd","Type":"ContainerDied","Data":"9e53268c2691690b3b15100c2b2028fc1b73ab435d13dc437577c718dfc8ded9"} Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.388621 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" podStartSLOduration=23.388599184 podStartE2EDuration="23.388599184s" podCreationTimestamp="2026-02-27 10:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:17.376978201 +0000 UTC m=+289.375249189" watchObservedRunningTime="2026-02-27 10:22:17.388599184 +0000 UTC m=+289.386870152" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.397929 4998 scope.go:117] "RemoveContainer" containerID="08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044" Feb 27 10:22:17 crc kubenswrapper[4998]: E0227 10:22:17.399017 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044\": container with ID starting with 08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044 not found: ID does not exist" containerID="08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.399061 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044"} err="failed to get container status \"08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044\": rpc error: code = NotFound desc = could not find container \"08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044\": container with ID starting with 08351a3e0eb7d2d1f5499139aaaf8d456736cf7aecd224dba49c6d991d8df044 not found: ID does not exist" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.477015 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cc4f5887c-j8pt7"] Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.477070 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cc4f5887c-j8pt7"] Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.604723 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-679b689f4c-lmpbh"] Feb 27 10:22:17 crc kubenswrapper[4998]: W0227 10:22:17.616162 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82833b5f_2598_4958_a974_3ae9455ecc78.slice/crio-24e0eadfd102504609487b81ad9c61cc2e506b183dca84a8c14140281f96e57e WatchSource:0}: Error finding container 24e0eadfd102504609487b81ad9c61cc2e506b183dca84a8c14140281f96e57e: Status 404 returned error can't find the container with id 24e0eadfd102504609487b81ad9c61cc2e506b183dca84a8c14140281f96e57e Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.758421 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.852851 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghks5\" (UniqueName: \"kubernetes.io/projected/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-kube-api-access-ghks5\") pod \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.852944 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-serving-cert\") pod \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.852969 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-client-ca\") pod \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.852991 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-config\") pod \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\" (UID: \"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15\") " Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.854211 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" (UID: "5b7d9866-74be-49cd-bcdf-e5d0f6c47d15"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.854271 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-config" (OuterVolumeSpecName: "config") pod "5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" (UID: "5b7d9866-74be-49cd-bcdf-e5d0f6c47d15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.860961 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" (UID: "5b7d9866-74be-49cd-bcdf-e5d0f6c47d15"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.863131 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-kube-api-access-ghks5" (OuterVolumeSpecName: "kube-api-access-ghks5") pod "5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" (UID: "5b7d9866-74be-49cd-bcdf-e5d0f6c47d15"). InnerVolumeSpecName "kube-api-access-ghks5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.955314 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.955633 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghks5\" (UniqueName: \"kubernetes.io/projected/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-kube-api-access-ghks5\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.955649 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:17 crc kubenswrapper[4998]: I0227 10:22:17.955659 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.218801 4998 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-18 01:46:24.39431994 +0000 UTC Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.218846 4998 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6327h24m6.175477162s for next certificate rotation Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.382279 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ft6" event={"ID":"1f770761-42e0-4e42-92c0-1e7fb8e45a49","Type":"ContainerStarted","Data":"b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452"} Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.384938 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" event={"ID":"82833b5f-2598-4958-a974-3ae9455ecc78","Type":"ContainerStarted","Data":"c005a84825d62b25ffc063daa23a406007d691a8c8b5c9b7765bc9016ef529b9"} Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.384975 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" event={"ID":"82833b5f-2598-4958-a974-3ae9455ecc78","Type":"ContainerStarted","Data":"24e0eadfd102504609487b81ad9c61cc2e506b183dca84a8c14140281f96e57e"} Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.385681 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.395995 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.402348 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n6ft6" podStartSLOduration=14.629610058 podStartE2EDuration="36.402327324s" podCreationTimestamp="2026-02-27 10:21:42 +0000 UTC" firstStartedPulling="2026-02-27 10:21:56.117778021 +0000 UTC m=+268.116048989" lastFinishedPulling="2026-02-27 10:22:17.890495287 +0000 UTC m=+289.888766255" observedRunningTime="2026-02-27 10:22:18.399237608 +0000 UTC m=+290.397508576" watchObservedRunningTime="2026-02-27 10:22:18.402327324 +0000 UTC m=+290.400598292" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.402981 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xp8s" event={"ID":"ff2b77d3-c103-46bc-9622-7f4a87ea0264","Type":"ContainerStarted","Data":"0cd8e45a7c93befc8c805ebe723890af0f53904aed8fa3fe14c19aa1eda844ca"} Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.407525 4998 
generic.go:334] "Generic (PLEG): container finished" podID="5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" containerID="767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74" exitCode=0 Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.407595 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" event={"ID":"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15","Type":"ContainerDied","Data":"767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74"} Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.407626 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" event={"ID":"5b7d9866-74be-49cd-bcdf-e5d0f6c47d15","Type":"ContainerDied","Data":"97a6ddf71f1bfe39517fd614ec0220553d80eb09c5f8260ecbb02dd877b5cd6c"} Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.407647 4998 scope.go:117] "RemoveContainer" containerID="767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.407730 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.421478 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" podStartSLOduration=4.421456344 podStartE2EDuration="4.421456344s" podCreationTimestamp="2026-02-27 10:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:18.415377676 +0000 UTC m=+290.413648654" watchObservedRunningTime="2026-02-27 10:22:18.421456344 +0000 UTC m=+290.419727312" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.442679 4998 scope.go:117] "RemoveContainer" containerID="767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.443750 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8xp8s" podStartSLOduration=13.791605279 podStartE2EDuration="35.443732982s" podCreationTimestamp="2026-02-27 10:21:43 +0000 UTC" firstStartedPulling="2026-02-27 10:21:56.117970206 +0000 UTC m=+268.116241174" lastFinishedPulling="2026-02-27 10:22:17.770097909 +0000 UTC m=+289.768368877" observedRunningTime="2026-02-27 10:22:18.442431576 +0000 UTC m=+290.440702564" watchObservedRunningTime="2026-02-27 10:22:18.443732982 +0000 UTC m=+290.442003950" Feb 27 10:22:18 crc kubenswrapper[4998]: E0227 10:22:18.445426 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74\": container with ID starting with 767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74 not found: ID does not exist" containerID="767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74" Feb 27 10:22:18 crc 
kubenswrapper[4998]: I0227 10:22:18.445474 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74"} err="failed to get container status \"767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74\": rpc error: code = NotFound desc = could not find container \"767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74\": container with ID starting with 767ee612252c069e15e179c5ba011232886199aa388aca27ee571b777a4b9b74 not found: ID does not exist" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.460648 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"] Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.465947 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-667964dbd8-56s2h"] Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.695289 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536462-2r92d" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.773407 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" path="/var/lib/kubelet/pods/5b7d9866-74be-49cd-bcdf-e5d0f6c47d15/volumes" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.774302 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf8d7364-856b-4990-97e6-77b661bf2277" path="/var/lib/kubelet/pods/cf8d7364-856b-4990-97e6-77b661bf2277/volumes" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.856989 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.862070 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.871663 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s87lq\" (UniqueName: \"kubernetes.io/projected/7ea17fe7-41e2-4264-909f-0e905886524b-kube-api-access-s87lq\") pod \"7ea17fe7-41e2-4264-909f-0e905886524b\" (UID: \"7ea17fe7-41e2-4264-909f-0e905886524b\") " Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.912548 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea17fe7-41e2-4264-909f-0e905886524b-kube-api-access-s87lq" (OuterVolumeSpecName: "kube-api-access-s87lq") pod "7ea17fe7-41e2-4264-909f-0e905886524b" (UID: "7ea17fe7-41e2-4264-909f-0e905886524b"). InnerVolumeSpecName "kube-api-access-s87lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.973343 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kubelet-dir\") pod \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\" (UID: \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\") " Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.973539 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kubelet-dir\") pod \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\" (UID: \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\") " Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.973660 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kube-api-access\") pod \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\" (UID: \"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd\") " Feb 27 10:22:18 crc 
kubenswrapper[4998]: I0227 10:22:18.973792 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kube-api-access\") pod \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\" (UID: \"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d\") " Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.974297 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s87lq\" (UniqueName: \"kubernetes.io/projected/7ea17fe7-41e2-4264-909f-0e905886524b-kube-api-access-s87lq\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.974476 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cbaa6e79-1a8b-4d0c-bfa6-f9292907081d" (UID: "cbaa6e79-1a8b-4d0c-bfa6-f9292907081d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.974646 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd" (UID: "4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.977695 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cbaa6e79-1a8b-4d0c-bfa6-f9292907081d" (UID: "cbaa6e79-1a8b-4d0c-bfa6-f9292907081d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:22:18 crc kubenswrapper[4998]: I0227 10:22:18.979752 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd" (UID: "4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.076997 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.077040 4998 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.077053 4998 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbaa6e79-1a8b-4d0c-bfa6-f9292907081d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.077064 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.218954 4998 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-09 11:21:50.40295537 +0000 UTC Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.218995 4998 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7584h59m31.183963588s for next certificate rotation Feb 
27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.298886 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"] Feb 27 10:22:19 crc kubenswrapper[4998]: E0227 10:22:19.299095 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd" containerName="pruner" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.299106 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd" containerName="pruner" Feb 27 10:22:19 crc kubenswrapper[4998]: E0227 10:22:19.299121 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea17fe7-41e2-4264-909f-0e905886524b" containerName="oc" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.299126 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea17fe7-41e2-4264-909f-0e905886524b" containerName="oc" Feb 27 10:22:19 crc kubenswrapper[4998]: E0227 10:22:19.299133 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" containerName="route-controller-manager" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.299139 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" containerName="route-controller-manager" Feb 27 10:22:19 crc kubenswrapper[4998]: E0227 10:22:19.299147 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaa6e79-1a8b-4d0c-bfa6-f9292907081d" containerName="pruner" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.299153 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaa6e79-1a8b-4d0c-bfa6-f9292907081d" containerName="pruner" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.299265 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7d9866-74be-49cd-bcdf-e5d0f6c47d15" containerName="route-controller-manager" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 
10:22:19.299274 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea17fe7-41e2-4264-909f-0e905886524b" containerName="oc" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.299282 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd" containerName="pruner" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.299294 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbaa6e79-1a8b-4d0c-bfa6-f9292907081d" containerName="pruner" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.299664 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.301002 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"] Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.302955 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.303158 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.303380 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.303500 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.303977 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.304560 4998 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.381475 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b32b89-b335-44e6-8083-a60ed3aba60f-serving-cert\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.381567 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ndqp\" (UniqueName: \"kubernetes.io/projected/48b32b89-b335-44e6-8083-a60ed3aba60f-kube-api-access-8ndqp\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.381651 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-client-ca\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.381673 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-config\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.385844 4998 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.386502 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.396430 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.429156 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536462-2r92d" event={"ID":"7ea17fe7-41e2-4264-909f-0e905886524b","Type":"ContainerDied","Data":"da81fe2dae9825bb2b1558cf845e81639dd5b8ea0a9ebe191f9523ce3334e2a8"} Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.429208 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da81fe2dae9825bb2b1558cf845e81639dd5b8ea0a9ebe191f9523ce3334e2a8" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.429323 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536462-2r92d" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.441463 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cbaa6e79-1a8b-4d0c-bfa6-f9292907081d","Type":"ContainerDied","Data":"15860cdb1e4ec585aef6dab13b64738f0ab4495cdfcb9dd475ccaaf8fe83b6c2"} Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.441505 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15860cdb1e4ec585aef6dab13b64738f0ab4495cdfcb9dd475ccaaf8fe83b6c2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.441585 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.457284 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.457580 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d1d2ac5-97b7-4b33-b6ea-50cd9fffeccd","Type":"ContainerDied","Data":"366abfd0f793a5bdd160f4522e259d7143075c4cac3e9fecf10eb7c465cdaa49"} Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.457605 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="366abfd0f793a5bdd160f4522e259d7143075c4cac3e9fecf10eb7c465cdaa49" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.482642 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-config\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.482690 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef5a2466-eb1e-408c-934b-cf47168986b8-kube-api-access\") pod \"installer-9-crc\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.482731 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.482785 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-var-lock\") pod \"installer-9-crc\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.482821 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b32b89-b335-44e6-8083-a60ed3aba60f-serving-cert\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.482843 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ndqp\" (UniqueName: \"kubernetes.io/projected/48b32b89-b335-44e6-8083-a60ed3aba60f-kube-api-access-8ndqp\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.482880 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-client-ca\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.484141 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-client-ca\") pod 
\"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.485570 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-config\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.489994 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b32b89-b335-44e6-8083-a60ed3aba60f-serving-cert\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.506198 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ndqp\" (UniqueName: \"kubernetes.io/projected/48b32b89-b335-44e6-8083-a60ed3aba60f-kube-api-access-8ndqp\") pod \"route-controller-manager-856b9b75b6-wdlx2\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") " pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.584348 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.584532 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-var-lock\") pod \"installer-9-crc\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.584703 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef5a2466-eb1e-408c-934b-cf47168986b8-kube-api-access\") pod \"installer-9-crc\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.584709 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-var-lock\") pod \"installer-9-crc\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.585220 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.606035 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef5a2466-eb1e-408c-934b-cf47168986b8-kube-api-access\") pod \"installer-9-crc\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.618894 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"
Feb 27 10:22:19 crc kubenswrapper[4998]: I0227 10:22:19.716996 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 27 10:22:20 crc kubenswrapper[4998]: I0227 10:22:20.116096 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"]
Feb 27 10:22:20 crc kubenswrapper[4998]: W0227 10:22:20.134437 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48b32b89_b335_44e6_8083_a60ed3aba60f.slice/crio-e518731149f80d9a41aff95610c96fb9c274eb3c63d19b34db76e210b481a684 WatchSource:0}: Error finding container e518731149f80d9a41aff95610c96fb9c274eb3c63d19b34db76e210b481a684: Status 404 returned error can't find the container with id e518731149f80d9a41aff95610c96fb9c274eb3c63d19b34db76e210b481a684
Feb 27 10:22:20 crc kubenswrapper[4998]: I0227 10:22:20.196675 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 27 10:22:20 crc kubenswrapper[4998]: W0227 10:22:20.203075 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podef5a2466_eb1e_408c_934b_cf47168986b8.slice/crio-0543b2c0434e7c330caf7768811d842a53c27e3a3ad4463af090a7f21ab716c1 WatchSource:0}: Error finding container 0543b2c0434e7c330caf7768811d842a53c27e3a3ad4463af090a7f21ab716c1: Status 404 returned error can't find the container with id 0543b2c0434e7c330caf7768811d842a53c27e3a3ad4463af090a7f21ab716c1
Feb 27 10:22:20 crc kubenswrapper[4998]: I0227 10:22:20.465568 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef5a2466-eb1e-408c-934b-cf47168986b8","Type":"ContainerStarted","Data":"0543b2c0434e7c330caf7768811d842a53c27e3a3ad4463af090a7f21ab716c1"}
Feb 27 10:22:20 crc kubenswrapper[4998]: I0227 10:22:20.467490 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" event={"ID":"48b32b89-b335-44e6-8083-a60ed3aba60f","Type":"ContainerStarted","Data":"e518731149f80d9a41aff95610c96fb9c274eb3c63d19b34db76e210b481a684"}
Feb 27 10:22:22 crc kubenswrapper[4998]: I0227 10:22:22.483479 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" event={"ID":"48b32b89-b335-44e6-8083-a60ed3aba60f","Type":"ContainerStarted","Data":"a9f3a9b93b507fb60aed07de32d8b7b1830b97f12d5f6dbf33651f3a0c9a7dca"}
Feb 27 10:22:23 crc kubenswrapper[4998]: I0227 10:22:23.311370 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n6ft6"
Feb 27 10:22:23 crc kubenswrapper[4998]: I0227 10:22:23.311693 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n6ft6"
Feb 27 10:22:23 crc kubenswrapper[4998]: I0227 10:22:23.490916 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef5a2466-eb1e-408c-934b-cf47168986b8","Type":"ContainerStarted","Data":"b210414b8903d594001c84b551c06e97dc1c675da1ca488233d74f9d9c8ba270"}
Feb 27 10:22:23 crc kubenswrapper[4998]: I0227 10:22:23.512281 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.512259081 podStartE2EDuration="4.512259081s" podCreationTimestamp="2026-02-27 10:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:23.510811221 +0000 UTC m=+295.509082239" watchObservedRunningTime="2026-02-27 10:22:23.512259081 +0000 UTC m=+295.510530049"
Feb 27 10:22:24 crc kubenswrapper[4998]: I0227 10:22:24.001654 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8xp8s"
Feb 27 10:22:24 crc kubenswrapper[4998]: I0227 10:22:24.002036 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8xp8s"
Feb 27 10:22:24 crc kubenswrapper[4998]: I0227 10:22:24.031623 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n6ft6"
Feb 27 10:22:24 crc kubenswrapper[4998]: I0227 10:22:24.042020 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8xp8s"
Feb 27 10:22:24 crc kubenswrapper[4998]: I0227 10:22:24.525356 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" podStartSLOduration=10.525329943 podStartE2EDuration="10.525329943s" podCreationTimestamp="2026-02-27 10:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:24.521334183 +0000 UTC m=+296.519605151" watchObservedRunningTime="2026-02-27 10:22:24.525329943 +0000 UTC m=+296.523600911"
Feb 27 10:22:24 crc kubenswrapper[4998]: I0227 10:22:24.542916 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n6ft6"
Feb 27 10:22:24 crc kubenswrapper[4998]: I0227 10:22:24.546435 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8xp8s"
Feb 27 10:22:25 crc kubenswrapper[4998]: I0227 10:22:25.692897 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xp8s"]
Feb 27 10:22:27 crc kubenswrapper[4998]: I0227 10:22:27.518995 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8xp8s" podUID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerName="registry-server" containerID="cri-o://0cd8e45a7c93befc8c805ebe723890af0f53904aed8fa3fe14c19aa1eda844ca" gracePeriod=2
Feb 27 10:22:29 crc kubenswrapper[4998]: I0227 10:22:29.531110 4998 generic.go:334] "Generic (PLEG): container finished" podID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerID="0cd8e45a7c93befc8c805ebe723890af0f53904aed8fa3fe14c19aa1eda844ca" exitCode=0
Feb 27 10:22:29 crc kubenswrapper[4998]: I0227 10:22:29.531163 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xp8s" event={"ID":"ff2b77d3-c103-46bc-9622-7f4a87ea0264","Type":"ContainerDied","Data":"0cd8e45a7c93befc8c805ebe723890af0f53904aed8fa3fe14c19aa1eda844ca"}
Feb 27 10:22:29 crc kubenswrapper[4998]: I0227 10:22:29.619286 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"
Feb 27 10:22:29 crc kubenswrapper[4998]: I0227 10:22:29.624702 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"
Feb 27 10:22:32 crc kubenswrapper[4998]: I0227 10:22:32.831820 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xp8s"
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.027009 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-utilities\") pod \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") "
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.027079 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-catalog-content\") pod \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") "
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.027114 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs5st\" (UniqueName: \"kubernetes.io/projected/ff2b77d3-c103-46bc-9622-7f4a87ea0264-kube-api-access-rs5st\") pod \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\" (UID: \"ff2b77d3-c103-46bc-9622-7f4a87ea0264\") "
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.027940 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-utilities" (OuterVolumeSpecName: "utilities") pod "ff2b77d3-c103-46bc-9622-7f4a87ea0264" (UID: "ff2b77d3-c103-46bc-9622-7f4a87ea0264"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.034735 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2b77d3-c103-46bc-9622-7f4a87ea0264-kube-api-access-rs5st" (OuterVolumeSpecName: "kube-api-access-rs5st") pod "ff2b77d3-c103-46bc-9622-7f4a87ea0264" (UID: "ff2b77d3-c103-46bc-9622-7f4a87ea0264"). InnerVolumeSpecName "kube-api-access-rs5st". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.054763 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff2b77d3-c103-46bc-9622-7f4a87ea0264" (UID: "ff2b77d3-c103-46bc-9622-7f4a87ea0264"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.128391 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.128425 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs5st\" (UniqueName: \"kubernetes.io/projected/ff2b77d3-c103-46bc-9622-7f4a87ea0264-kube-api-access-rs5st\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.128436 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff2b77d3-c103-46bc-9622-7f4a87ea0264-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.553155 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xp8s" event={"ID":"ff2b77d3-c103-46bc-9622-7f4a87ea0264","Type":"ContainerDied","Data":"b6d854846ddc3bfd9111302b300de72471ac615eefd3658024ce4e32b2895bd5"}
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.553197 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xp8s"
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.553544 4998 scope.go:117] "RemoveContainer" containerID="0cd8e45a7c93befc8c805ebe723890af0f53904aed8fa3fe14c19aa1eda844ca"
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.568706 4998 scope.go:117] "RemoveContainer" containerID="6a880d179a30421d371cf14980d7d04a08f8b606e7b4fd8a0cddb1b5e7595c30"
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.589997 4998 scope.go:117] "RemoveContainer" containerID="6cacc16d818fcc29228160f563022b2a902c9c06a0f5b6ac4c159ec5b1bb8aa7"
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.591899 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xp8s"]
Feb 27 10:22:33 crc kubenswrapper[4998]: I0227 10:22:33.594564 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xp8s"]
Feb 27 10:22:34 crc kubenswrapper[4998]: I0227 10:22:34.037841 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-679b689f4c-lmpbh"]
Feb 27 10:22:34 crc kubenswrapper[4998]: I0227 10:22:34.038087 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" podUID="82833b5f-2598-4958-a974-3ae9455ecc78" containerName="controller-manager" containerID="cri-o://c005a84825d62b25ffc063daa23a406007d691a8c8b5c9b7765bc9016ef529b9" gracePeriod=30
Feb 27 10:22:34 crc kubenswrapper[4998]: I0227 10:22:34.038553 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"]
Feb 27 10:22:34 crc kubenswrapper[4998]: I0227 10:22:34.038655 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" podUID="48b32b89-b335-44e6-8083-a60ed3aba60f" containerName="route-controller-manager" containerID="cri-o://a9f3a9b93b507fb60aed07de32d8b7b1830b97f12d5f6dbf33651f3a0c9a7dca" gracePeriod=30
Feb 27 10:22:34 crc kubenswrapper[4998]: I0227 10:22:34.561447 4998 generic.go:334] "Generic (PLEG): container finished" podID="36a1e36a-d138-4606-a280-ef688b10a438" containerID="713976cb87fce173e3b17a3d70cb1e821996f3247786856d82b49adc76d54e8a" exitCode=0
Feb 27 10:22:34 crc kubenswrapper[4998]: I0227 10:22:34.561539 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536460-jgglv" event={"ID":"36a1e36a-d138-4606-a280-ef688b10a438","Type":"ContainerDied","Data":"713976cb87fce173e3b17a3d70cb1e821996f3247786856d82b49adc76d54e8a"}
Feb 27 10:22:34 crc kubenswrapper[4998]: I0227 10:22:34.777621 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" path="/var/lib/kubelet/pods/ff2b77d3-c103-46bc-9622-7f4a87ea0264/volumes"
Feb 27 10:22:35 crc kubenswrapper[4998]: E0227 10:22:35.301434 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 27 10:22:35 crc kubenswrapper[4998]: E0227 10:22:35.301970 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-972ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-l7vs8_openshift-marketplace(5d365be7-41cf-4570-a8fb-ef974affdb95): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 27 10:22:35 crc kubenswrapper[4998]: E0227 10:22:35.303599 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l7vs8" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.569975 4998 generic.go:334] "Generic (PLEG): container finished" podID="48b32b89-b335-44e6-8083-a60ed3aba60f" containerID="a9f3a9b93b507fb60aed07de32d8b7b1830b97f12d5f6dbf33651f3a0c9a7dca" exitCode=0
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.570060 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" event={"ID":"48b32b89-b335-44e6-8083-a60ed3aba60f","Type":"ContainerDied","Data":"a9f3a9b93b507fb60aed07de32d8b7b1830b97f12d5f6dbf33651f3a0c9a7dca"}
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.573108 4998 generic.go:334] "Generic (PLEG): container finished" podID="82833b5f-2598-4958-a974-3ae9455ecc78" containerID="c005a84825d62b25ffc063daa23a406007d691a8c8b5c9b7765bc9016ef529b9" exitCode=0
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.573678 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" event={"ID":"82833b5f-2598-4958-a974-3ae9455ecc78","Type":"ContainerDied","Data":"c005a84825d62b25ffc063daa23a406007d691a8c8b5c9b7765bc9016ef529b9"}
Feb 27 10:22:35 crc kubenswrapper[4998]: E0227 10:22:35.838619 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l7vs8" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.862730 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.876851 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.893279 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"]
Feb 27 10:22:35 crc kubenswrapper[4998]: E0227 10:22:35.893528 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerName="extract-utilities"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.893544 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerName="extract-utilities"
Feb 27 10:22:35 crc kubenswrapper[4998]: E0227 10:22:35.893553 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b32b89-b335-44e6-8083-a60ed3aba60f" containerName="route-controller-manager"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.893561 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b32b89-b335-44e6-8083-a60ed3aba60f" containerName="route-controller-manager"
Feb 27 10:22:35 crc kubenswrapper[4998]: E0227 10:22:35.893577 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerName="registry-server"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.893585 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerName="registry-server"
Feb 27 10:22:35 crc kubenswrapper[4998]: E0227 10:22:35.893599 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerName="extract-content"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.893606 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerName="extract-content"
Feb 27 10:22:35 crc kubenswrapper[4998]: E0227 10:22:35.893617 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82833b5f-2598-4958-a974-3ae9455ecc78" containerName="controller-manager"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.893623 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="82833b5f-2598-4958-a974-3ae9455ecc78" containerName="controller-manager"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.893759 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b32b89-b335-44e6-8083-a60ed3aba60f" containerName="route-controller-manager"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.893777 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2b77d3-c103-46bc-9622-7f4a87ea0264" containerName="registry-server"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.893785 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="82833b5f-2598-4958-a974-3ae9455ecc78" containerName="controller-manager"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.898158 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.902794 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"]
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978187 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh9j8\" (UniqueName: \"kubernetes.io/projected/82833b5f-2598-4958-a974-3ae9455ecc78-kube-api-access-nh9j8\") pod \"82833b5f-2598-4958-a974-3ae9455ecc78\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") "
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978255 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-config\") pod \"82833b5f-2598-4958-a974-3ae9455ecc78\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") "
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978284 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-config\") pod \"48b32b89-b335-44e6-8083-a60ed3aba60f\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") "
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978314 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-client-ca\") pod \"82833b5f-2598-4958-a974-3ae9455ecc78\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") "
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978341 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b32b89-b335-44e6-8083-a60ed3aba60f-serving-cert\") pod \"48b32b89-b335-44e6-8083-a60ed3aba60f\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") "
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978368 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ndqp\" (UniqueName: \"kubernetes.io/projected/48b32b89-b335-44e6-8083-a60ed3aba60f-kube-api-access-8ndqp\") pod \"48b32b89-b335-44e6-8083-a60ed3aba60f\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") "
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978390 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82833b5f-2598-4958-a974-3ae9455ecc78-serving-cert\") pod \"82833b5f-2598-4958-a974-3ae9455ecc78\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") "
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978409 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-client-ca\") pod \"48b32b89-b335-44e6-8083-a60ed3aba60f\" (UID: \"48b32b89-b335-44e6-8083-a60ed3aba60f\") "
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978449 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-proxy-ca-bundles\") pod \"82833b5f-2598-4958-a974-3ae9455ecc78\" (UID: \"82833b5f-2598-4958-a974-3ae9455ecc78\") "
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978636 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-config\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978667 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzrwc\" (UniqueName: \"kubernetes.io/projected/cf0d239d-a33e-4be9-866a-e98671b2cb99-kube-api-access-fzrwc\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978714 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-client-ca\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.978738 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0d239d-a33e-4be9-866a-e98671b2cb99-serving-cert\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.979999 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "82833b5f-2598-4958-a974-3ae9455ecc78" (UID: "82833b5f-2598-4958-a974-3ae9455ecc78"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.980422 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-config" (OuterVolumeSpecName: "config") pod "82833b5f-2598-4958-a974-3ae9455ecc78" (UID: "82833b5f-2598-4958-a974-3ae9455ecc78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.980535 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-client-ca" (OuterVolumeSpecName: "client-ca") pod "82833b5f-2598-4958-a974-3ae9455ecc78" (UID: "82833b5f-2598-4958-a974-3ae9455ecc78"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.980649 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-config" (OuterVolumeSpecName: "config") pod "48b32b89-b335-44e6-8083-a60ed3aba60f" (UID: "48b32b89-b335-44e6-8083-a60ed3aba60f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.981167 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-client-ca" (OuterVolumeSpecName: "client-ca") pod "48b32b89-b335-44e6-8083-a60ed3aba60f" (UID: "48b32b89-b335-44e6-8083-a60ed3aba60f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.985811 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b32b89-b335-44e6-8083-a60ed3aba60f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48b32b89-b335-44e6-8083-a60ed3aba60f" (UID: "48b32b89-b335-44e6-8083-a60ed3aba60f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.989131 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b32b89-b335-44e6-8083-a60ed3aba60f-kube-api-access-8ndqp" (OuterVolumeSpecName: "kube-api-access-8ndqp") pod "48b32b89-b335-44e6-8083-a60ed3aba60f" (UID: "48b32b89-b335-44e6-8083-a60ed3aba60f"). InnerVolumeSpecName "kube-api-access-8ndqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.990473 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82833b5f-2598-4958-a974-3ae9455ecc78-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "82833b5f-2598-4958-a974-3ae9455ecc78" (UID: "82833b5f-2598-4958-a974-3ae9455ecc78"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:22:35 crc kubenswrapper[4998]: I0227 10:22:35.993571 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82833b5f-2598-4958-a974-3ae9455ecc78-kube-api-access-nh9j8" (OuterVolumeSpecName: "kube-api-access-nh9j8") pod "82833b5f-2598-4958-a974-3ae9455ecc78" (UID: "82833b5f-2598-4958-a974-3ae9455ecc78"). InnerVolumeSpecName "kube-api-access-nh9j8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.079861 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-config\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080275 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzrwc\" (UniqueName: \"kubernetes.io/projected/cf0d239d-a33e-4be9-866a-e98671b2cb99-kube-api-access-fzrwc\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080350 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-client-ca\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080394 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0d239d-a33e-4be9-866a-e98671b2cb99-serving-cert\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080506 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080518 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b32b89-b335-44e6-8083-a60ed3aba60f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080527 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ndqp\" (UniqueName: \"kubernetes.io/projected/48b32b89-b335-44e6-8083-a60ed3aba60f-kube-api-access-8ndqp\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080538 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82833b5f-2598-4958-a974-3ae9455ecc78-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080547 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080555 4998 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080586 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh9j8\" (UniqueName: \"kubernetes.io/projected/82833b5f-2598-4958-a974-3ae9455ecc78-kube-api-access-nh9j8\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080595 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82833b5f-2598-4958-a974-3ae9455ecc78-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.080603 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b32b89-b335-44e6-8083-a60ed3aba60f-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.082589 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-config\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.084786 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-client-ca\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.085515 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0d239d-a33e-4be9-866a-e98671b2cb99-serving-cert\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.102034 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzrwc\" (UniqueName: \"kubernetes.io/projected/cf0d239d-a33e-4be9-866a-e98671b2cb99-kube-api-access-fzrwc\") pod \"route-controller-manager-5485fd6c89-629gf\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") " pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.215781 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.436097 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"]
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.581752 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" event={"ID":"48b32b89-b335-44e6-8083-a60ed3aba60f","Type":"ContainerDied","Data":"e518731149f80d9a41aff95610c96fb9c274eb3c63d19b34db76e210b481a684"}
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.581832 4998 scope.go:117] "RemoveContainer" containerID="a9f3a9b93b507fb60aed07de32d8b7b1830b97f12d5f6dbf33651f3a0c9a7dca"
Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.582179 4998 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2" Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.585326 4998 generic.go:334] "Generic (PLEG): container finished" podID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerID="cf51a0607d321e9c52e941ac31d3feee82232b9125ed9940d35110139b3daede" exitCode=0 Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.585423 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgvgp" event={"ID":"e5dbdfd7-f5af-4244-acbd-508173f391fe","Type":"ContainerDied","Data":"cf51a0607d321e9c52e941ac31d3feee82232b9125ed9940d35110139b3daede"} Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.589916 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7v8l" event={"ID":"c0b13491-88ff-401a-9df3-dc6c981fb11c","Type":"ContainerStarted","Data":"dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0"} Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.595603 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" event={"ID":"82833b5f-2598-4958-a974-3ae9455ecc78","Type":"ContainerDied","Data":"24e0eadfd102504609487b81ad9c61cc2e506b183dca84a8c14140281f96e57e"} Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.595630 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-679b689f4c-lmpbh" Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.598084 4998 generic.go:334] "Generic (PLEG): container finished" podID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerID="1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55" exitCode=0 Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.598173 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v4md" event={"ID":"de440cc8-1a01-4c10-83e6-027afdacde0c","Type":"ContainerDied","Data":"1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55"} Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.599317 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf" event={"ID":"cf0d239d-a33e-4be9-866a-e98671b2cb99","Type":"ContainerStarted","Data":"ff76d22657aa244ee68b0add660e7ebbfac8e1bcabb04abb23ba41bb945356ca"} Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.601626 4998 generic.go:334] "Generic (PLEG): container finished" podID="3cb466ff-6f20-443f-983d-49332c97e530" containerID="190d1837ab99059fb9227fcbdcb28a19110ccc98bd28c161c0fcb85153e113c8" exitCode=0 Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.601761 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbbbf" event={"ID":"3cb466ff-6f20-443f-983d-49332c97e530","Type":"ContainerDied","Data":"190d1837ab99059fb9227fcbdcb28a19110ccc98bd28c161c0fcb85153e113c8"} Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.604530 4998 generic.go:334] "Generic (PLEG): container finished" podID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerID="cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4" exitCode=0 Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.604683 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-sxgdw" event={"ID":"f5d59240-590d-47d4-95f7-de0c01a8d3e2","Type":"ContainerDied","Data":"cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4"} Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.685128 4998 scope.go:117] "RemoveContainer" containerID="c005a84825d62b25ffc063daa23a406007d691a8c8b5c9b7765bc9016ef529b9" Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.723216 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"] Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.727616 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856b9b75b6-wdlx2"] Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.735079 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-679b689f4c-lmpbh"] Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.740460 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-679b689f4c-lmpbh"] Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.777472 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b32b89-b335-44e6-8083-a60ed3aba60f" path="/var/lib/kubelet/pods/48b32b89-b335-44e6-8083-a60ed3aba60f/volumes" Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.778269 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82833b5f-2598-4958-a974-3ae9455ecc78" path="/var/lib/kubelet/pods/82833b5f-2598-4958-a974-3ae9455ecc78/volumes" Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.956471 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536460-jgglv" Feb 27 10:22:36 crc kubenswrapper[4998]: I0227 10:22:36.990645 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6njcz\" (UniqueName: \"kubernetes.io/projected/36a1e36a-d138-4606-a280-ef688b10a438-kube-api-access-6njcz\") pod \"36a1e36a-d138-4606-a280-ef688b10a438\" (UID: \"36a1e36a-d138-4606-a280-ef688b10a438\") " Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.004554 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a1e36a-d138-4606-a280-ef688b10a438-kube-api-access-6njcz" (OuterVolumeSpecName: "kube-api-access-6njcz") pod "36a1e36a-d138-4606-a280-ef688b10a438" (UID: "36a1e36a-d138-4606-a280-ef688b10a438"). InnerVolumeSpecName "kube-api-access-6njcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.092560 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6njcz\" (UniqueName: \"kubernetes.io/projected/36a1e36a-d138-4606-a280-ef688b10a438-kube-api-access-6njcz\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.613676 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgvgp" event={"ID":"e5dbdfd7-f5af-4244-acbd-508173f391fe","Type":"ContainerStarted","Data":"1da6fbc46650aa6e5ea806cc20bc407f9e3170d6cded345cdf7bd57e63f796ee"} Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.616604 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v4md" event={"ID":"de440cc8-1a01-4c10-83e6-027afdacde0c","Type":"ContainerStarted","Data":"bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0"} Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.619287 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zbbbf" event={"ID":"3cb466ff-6f20-443f-983d-49332c97e530","Type":"ContainerStarted","Data":"17b710bf3bf01b28d9c56eeb14e239ea97f202f02c3415e3751ae6b12e2cb86e"} Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.623017 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxgdw" event={"ID":"f5d59240-590d-47d4-95f7-de0c01a8d3e2","Type":"ContainerStarted","Data":"162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c"} Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.624904 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536460-jgglv" event={"ID":"36a1e36a-d138-4606-a280-ef688b10a438","Type":"ContainerDied","Data":"4ec9b50ba1f82d02ab11462b04a27face6fd148daa23b43a4e64ac420b6e5527"} Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.624927 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec9b50ba1f82d02ab11462b04a27face6fd148daa23b43a4e64ac420b6e5527" Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.625124 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536460-jgglv" Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.626803 4998 generic.go:334] "Generic (PLEG): container finished" podID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerID="dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0" exitCode=0 Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.626838 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7v8l" event={"ID":"c0b13491-88ff-401a-9df3-dc6c981fb11c","Type":"ContainerDied","Data":"dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0"} Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.631531 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf" event={"ID":"cf0d239d-a33e-4be9-866a-e98671b2cb99","Type":"ContainerStarted","Data":"e2adfecdaf3fa40a5a2538e6d352112ed186ba383e9157f7976aa4ea683ae08b"} Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.633508 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf" Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.640845 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf" Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.649559 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zgvgp" podStartSLOduration=2.642681993 podStartE2EDuration="56.649537145s" podCreationTimestamp="2026-02-27 10:21:41 +0000 UTC" firstStartedPulling="2026-02-27 10:21:43.040000457 +0000 UTC m=+255.038271425" lastFinishedPulling="2026-02-27 10:22:37.046855609 +0000 UTC m=+309.045126577" observedRunningTime="2026-02-27 10:22:37.646992304 +0000 UTC m=+309.645263302" 
watchObservedRunningTime="2026-02-27 10:22:37.649537145 +0000 UTC m=+309.647808113" Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.685575 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf" podStartSLOduration=3.685556044 podStartE2EDuration="3.685556044s" podCreationTimestamp="2026-02-27 10:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:37.667145443 +0000 UTC m=+309.665416421" watchObservedRunningTime="2026-02-27 10:22:37.685556044 +0000 UTC m=+309.683827012" Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.686764 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5v4md" podStartSLOduration=3.717191774 podStartE2EDuration="57.686754706s" podCreationTimestamp="2026-02-27 10:21:40 +0000 UTC" firstStartedPulling="2026-02-27 10:21:43.020930765 +0000 UTC m=+255.019201733" lastFinishedPulling="2026-02-27 10:22:36.990493707 +0000 UTC m=+308.988764665" observedRunningTime="2026-02-27 10:22:37.683050744 +0000 UTC m=+309.681321712" watchObservedRunningTime="2026-02-27 10:22:37.686754706 +0000 UTC m=+309.685025674" Feb 27 10:22:37 crc kubenswrapper[4998]: I0227 10:22:37.719688 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sxgdw" podStartSLOduration=2.679543058 podStartE2EDuration="56.719664189s" podCreationTimestamp="2026-02-27 10:21:41 +0000 UTC" firstStartedPulling="2026-02-27 10:21:43.033418147 +0000 UTC m=+255.031689115" lastFinishedPulling="2026-02-27 10:22:37.073539278 +0000 UTC m=+309.071810246" observedRunningTime="2026-02-27 10:22:37.715154024 +0000 UTC m=+309.713424992" watchObservedRunningTime="2026-02-27 10:22:37.719664189 +0000 UTC m=+309.717935157" Feb 27 10:22:37 crc kubenswrapper[4998]: 
I0227 10:22:37.761258 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zbbbf" podStartSLOduration=2.649782874 podStartE2EDuration="56.761215221s" podCreationTimestamp="2026-02-27 10:21:41 +0000 UTC" firstStartedPulling="2026-02-27 10:21:43.029577662 +0000 UTC m=+255.027848630" lastFinishedPulling="2026-02-27 10:22:37.141010009 +0000 UTC m=+309.139280977" observedRunningTime="2026-02-27 10:22:37.754752982 +0000 UTC m=+309.753023970" watchObservedRunningTime="2026-02-27 10:22:37.761215221 +0000 UTC m=+309.759486199" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.283167 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6557cd49cf-q76cz"] Feb 27 10:22:38 crc kubenswrapper[4998]: E0227 10:22:38.283423 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a1e36a-d138-4606-a280-ef688b10a438" containerName="oc" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.283439 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a1e36a-d138-4606-a280-ef688b10a438" containerName="oc" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.283568 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a1e36a-d138-4606-a280-ef688b10a438" containerName="oc" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.283974 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.285568 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.285586 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.286001 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.286387 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.289560 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.292180 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.293485 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.298554 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6557cd49cf-q76cz"] Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.306890 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acda69a8-df5d-459f-91b5-4ccee1212ef6-serving-cert\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " 
pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.306937 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-client-ca\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.306975 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-config\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.307124 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssbmf\" (UniqueName: \"kubernetes.io/projected/acda69a8-df5d-459f-91b5-4ccee1212ef6-kube-api-access-ssbmf\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.307164 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-proxy-ca-bundles\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.408137 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssbmf\" 
(UniqueName: \"kubernetes.io/projected/acda69a8-df5d-459f-91b5-4ccee1212ef6-kube-api-access-ssbmf\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.408188 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-proxy-ca-bundles\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.408212 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acda69a8-df5d-459f-91b5-4ccee1212ef6-serving-cert\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.408249 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-client-ca\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.408277 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-config\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.409452 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-config\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.410556 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-proxy-ca-bundles\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.411091 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-client-ca\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.415629 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acda69a8-df5d-459f-91b5-4ccee1212ef6-serving-cert\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.426122 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssbmf\" (UniqueName: \"kubernetes.io/projected/acda69a8-df5d-459f-91b5-4ccee1212ef6-kube-api-access-ssbmf\") pod \"controller-manager-6557cd49cf-q76cz\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") " pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 
10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.611366 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.638530 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7v8l" event={"ID":"c0b13491-88ff-401a-9df3-dc6c981fb11c","Type":"ContainerStarted","Data":"75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342"} Feb 27 10:22:38 crc kubenswrapper[4998]: I0227 10:22:38.657930 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7v8l" podStartSLOduration=32.775946061 podStartE2EDuration="54.657914417s" podCreationTimestamp="2026-02-27 10:21:44 +0000 UTC" firstStartedPulling="2026-02-27 10:22:16.310051617 +0000 UTC m=+288.308322585" lastFinishedPulling="2026-02-27 10:22:38.192019943 +0000 UTC m=+310.190290941" observedRunningTime="2026-02-27 10:22:38.656527079 +0000 UTC m=+310.654798067" watchObservedRunningTime="2026-02-27 10:22:38.657914417 +0000 UTC m=+310.656185385" Feb 27 10:22:39 crc kubenswrapper[4998]: I0227 10:22:39.096860 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6557cd49cf-q76cz"] Feb 27 10:22:39 crc kubenswrapper[4998]: I0227 10:22:39.645496 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" event={"ID":"acda69a8-df5d-459f-91b5-4ccee1212ef6","Type":"ContainerStarted","Data":"a8c841cc7034777ff25731a6dcff74f8f4e42e2a05a97c78266fa7138d80be69"} Feb 27 10:22:39 crc kubenswrapper[4998]: I0227 10:22:39.645875 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" 
event={"ID":"acda69a8-df5d-459f-91b5-4ccee1212ef6","Type":"ContainerStarted","Data":"fdc2b4e809c27042a5fa4067ce6f89426590d00261e7f90bdcdf3444ff72148e"} Feb 27 10:22:39 crc kubenswrapper[4998]: I0227 10:22:39.664382 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" podStartSLOduration=5.664363996 podStartE2EDuration="5.664363996s" podCreationTimestamp="2026-02-27 10:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:39.660689874 +0000 UTC m=+311.658960862" watchObservedRunningTime="2026-02-27 10:22:39.664363996 +0000 UTC m=+311.662634964" Feb 27 10:22:40 crc kubenswrapper[4998]: I0227 10:22:40.505316 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:22:40 crc kubenswrapper[4998]: I0227 10:22:40.505637 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:22:40 crc kubenswrapper[4998]: I0227 10:22:40.505684 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:22:40 crc kubenswrapper[4998]: I0227 10:22:40.506293 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a"} 
pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 27 10:22:40 crc kubenswrapper[4998]: I0227 10:22:40.506347 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a" gracePeriod=600
Feb 27 10:22:40 crc kubenswrapper[4998]: I0227 10:22:40.655312 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a" exitCode=0
Feb 27 10:22:40 crc kubenswrapper[4998]: I0227 10:22:40.655385 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a"}
Feb 27 10:22:40 crc kubenswrapper[4998]: I0227 10:22:40.655698 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz"
Feb 27 10:22:40 crc kubenswrapper[4998]: I0227 10:22:40.663080 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.307870 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.308488 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.349342 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.526773 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sxgdw"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.526825 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sxgdw"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.577641 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sxgdw"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.662129 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"ef0e9e290020f4de6e2dbb18ef565f138df88eee5d534e2df45cae2f81d96bd3"}
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.703093 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zbbbf"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.703267 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zbbbf"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.760710 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zbbbf"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.939130 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zgvgp"
Feb 27 10:22:41 crc kubenswrapper[4998]: I0227 10:22:41.939279 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zgvgp"
Feb 27 10:22:42 crc kubenswrapper[4998]: I0227 10:22:42.004076 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zgvgp"
Feb 27 10:22:42 crc kubenswrapper[4998]: I0227 10:22:42.707189 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zbbbf"
Feb 27 10:22:42 crc kubenswrapper[4998]: I0227 10:22:42.710919 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:22:42 crc kubenswrapper[4998]: I0227 10:22:42.714812 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zgvgp"
Feb 27 10:22:43 crc kubenswrapper[4998]: I0227 10:22:43.494750 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zgvgp"]
Feb 27 10:22:44 crc kubenswrapper[4998]: I0227 10:22:44.103251 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zbbbf"]
Feb 27 10:22:44 crc kubenswrapper[4998]: I0227 10:22:44.502300 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7v8l"
Feb 27 10:22:44 crc kubenswrapper[4998]: I0227 10:22:44.502350 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7v8l"
Feb 27 10:22:44 crc kubenswrapper[4998]: I0227 10:22:44.550589 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7v8l"
Feb 27 10:22:44 crc kubenswrapper[4998]: I0227 10:22:44.675722 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zgvgp" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerName="registry-server" containerID="cri-o://1da6fbc46650aa6e5ea806cc20bc407f9e3170d6cded345cdf7bd57e63f796ee" gracePeriod=2
Feb 27 10:22:44 crc kubenswrapper[4998]: I0227 10:22:44.725843 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7v8l"
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.682356 4998 generic.go:334] "Generic (PLEG): container finished" podID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerID="1da6fbc46650aa6e5ea806cc20bc407f9e3170d6cded345cdf7bd57e63f796ee" exitCode=0
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.682939 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zbbbf" podUID="3cb466ff-6f20-443f-983d-49332c97e530" containerName="registry-server" containerID="cri-o://17b710bf3bf01b28d9c56eeb14e239ea97f202f02c3415e3751ae6b12e2cb86e" gracePeriod=2
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.683240 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgvgp" event={"ID":"e5dbdfd7-f5af-4244-acbd-508173f391fe","Type":"ContainerDied","Data":"1da6fbc46650aa6e5ea806cc20bc407f9e3170d6cded345cdf7bd57e63f796ee"}
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.822018 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zgvgp"
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.929022 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-catalog-content\") pod \"e5dbdfd7-f5af-4244-acbd-508173f391fe\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") "
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.929123 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-utilities\") pod \"e5dbdfd7-f5af-4244-acbd-508173f391fe\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") "
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.929178 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbhrd\" (UniqueName: \"kubernetes.io/projected/e5dbdfd7-f5af-4244-acbd-508173f391fe-kube-api-access-hbhrd\") pod \"e5dbdfd7-f5af-4244-acbd-508173f391fe\" (UID: \"e5dbdfd7-f5af-4244-acbd-508173f391fe\") "
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.930900 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-utilities" (OuterVolumeSpecName: "utilities") pod "e5dbdfd7-f5af-4244-acbd-508173f391fe" (UID: "e5dbdfd7-f5af-4244-acbd-508173f391fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.935690 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5dbdfd7-f5af-4244-acbd-508173f391fe-kube-api-access-hbhrd" (OuterVolumeSpecName: "kube-api-access-hbhrd") pod "e5dbdfd7-f5af-4244-acbd-508173f391fe" (UID: "e5dbdfd7-f5af-4244-acbd-508173f391fe"). InnerVolumeSpecName "kube-api-access-hbhrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:22:45 crc kubenswrapper[4998]: I0227 10:22:45.986620 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5dbdfd7-f5af-4244-acbd-508173f391fe" (UID: "e5dbdfd7-f5af-4244-acbd-508173f391fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.031008 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.031050 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbhrd\" (UniqueName: \"kubernetes.io/projected/e5dbdfd7-f5af-4244-acbd-508173f391fe-kube-api-access-hbhrd\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.031064 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5dbdfd7-f5af-4244-acbd-508173f391fe-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.696004 4998 generic.go:334] "Generic (PLEG): container finished" podID="3cb466ff-6f20-443f-983d-49332c97e530" containerID="17b710bf3bf01b28d9c56eeb14e239ea97f202f02c3415e3751ae6b12e2cb86e" exitCode=0
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.696082 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbbbf" event={"ID":"3cb466ff-6f20-443f-983d-49332c97e530","Type":"ContainerDied","Data":"17b710bf3bf01b28d9c56eeb14e239ea97f202f02c3415e3751ae6b12e2cb86e"}
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.698753 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgvgp" event={"ID":"e5dbdfd7-f5af-4244-acbd-508173f391fe","Type":"ContainerDied","Data":"231df6b323b537fbeff4d7efd8bd0e493fa186cf19351355b021be019f15528c"}
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.698817 4998 scope.go:117] "RemoveContainer" containerID="1da6fbc46650aa6e5ea806cc20bc407f9e3170d6cded345cdf7bd57e63f796ee"
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.698857 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zgvgp"
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.715460 4998 scope.go:117] "RemoveContainer" containerID="cf51a0607d321e9c52e941ac31d3feee82232b9125ed9940d35110139b3daede"
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.734591 4998 scope.go:117] "RemoveContainer" containerID="75b6b5087eb911517ea62f5e0db50c18fdaee5c495e7e6fcf1936a4a2cdbecfd"
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.736937 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zgvgp"]
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.741118 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zgvgp"]
Feb 27 10:22:46 crc kubenswrapper[4998]: I0227 10:22:46.775703 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" path="/var/lib/kubelet/pods/e5dbdfd7-f5af-4244-acbd-508173f391fe/volumes"
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.241488 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbbbf"
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.349397 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-catalog-content\") pod \"3cb466ff-6f20-443f-983d-49332c97e530\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") "
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.349455 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkjlk\" (UniqueName: \"kubernetes.io/projected/3cb466ff-6f20-443f-983d-49332c97e530-kube-api-access-rkjlk\") pod \"3cb466ff-6f20-443f-983d-49332c97e530\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") "
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.349572 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-utilities\") pod \"3cb466ff-6f20-443f-983d-49332c97e530\" (UID: \"3cb466ff-6f20-443f-983d-49332c97e530\") "
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.350395 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-utilities" (OuterVolumeSpecName: "utilities") pod "3cb466ff-6f20-443f-983d-49332c97e530" (UID: "3cb466ff-6f20-443f-983d-49332c97e530"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.353693 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb466ff-6f20-443f-983d-49332c97e530-kube-api-access-rkjlk" (OuterVolumeSpecName: "kube-api-access-rkjlk") pod "3cb466ff-6f20-443f-983d-49332c97e530" (UID: "3cb466ff-6f20-443f-983d-49332c97e530"). InnerVolumeSpecName "kube-api-access-rkjlk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.452947 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkjlk\" (UniqueName: \"kubernetes.io/projected/3cb466ff-6f20-443f-983d-49332c97e530-kube-api-access-rkjlk\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.453022 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.709437 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbbbf" event={"ID":"3cb466ff-6f20-443f-983d-49332c97e530","Type":"ContainerDied","Data":"1ded792f44acdfdeab7dbeb5733d24e8937a75894ddd51fa8bf7c53b3f9f2e13"}
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.709535 4998 scope.go:117] "RemoveContainer" containerID="17b710bf3bf01b28d9c56eeb14e239ea97f202f02c3415e3751ae6b12e2cb86e"
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.709529 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbbbf"
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.730916 4998 scope.go:117] "RemoveContainer" containerID="190d1837ab99059fb9227fcbdcb28a19110ccc98bd28c161c0fcb85153e113c8"
Feb 27 10:22:47 crc kubenswrapper[4998]: I0227 10:22:47.750927 4998 scope.go:117] "RemoveContainer" containerID="110021361cae5d6f0b4abe3422b708339a99b126a96aa31b39c970e66871c497"
Feb 27 10:22:48 crc kubenswrapper[4998]: I0227 10:22:48.444719 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cb466ff-6f20-443f-983d-49332c97e530" (UID: "3cb466ff-6f20-443f-983d-49332c97e530"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:22:48 crc kubenswrapper[4998]: I0227 10:22:48.468132 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb466ff-6f20-443f-983d-49332c97e530-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:48 crc kubenswrapper[4998]: I0227 10:22:48.639411 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zbbbf"]
Feb 27 10:22:48 crc kubenswrapper[4998]: I0227 10:22:48.642433 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zbbbf"]
Feb 27 10:22:48 crc kubenswrapper[4998]: I0227 10:22:48.718033 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7vs8" event={"ID":"5d365be7-41cf-4570-a8fb-ef974affdb95","Type":"ContainerStarted","Data":"7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02"}
Feb 27 10:22:48 crc kubenswrapper[4998]: I0227 10:22:48.774674 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb466ff-6f20-443f-983d-49332c97e530" path="/var/lib/kubelet/pods/3cb466ff-6f20-443f-983d-49332c97e530/volumes"
Feb 27 10:22:49 crc kubenswrapper[4998]: I0227 10:22:49.727838 4998 generic.go:334] "Generic (PLEG): container finished" podID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerID="7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02" exitCode=0
Feb 27 10:22:49 crc kubenswrapper[4998]: I0227 10:22:49.727874 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7vs8" event={"ID":"5d365be7-41cf-4570-a8fb-ef974affdb95","Type":"ContainerDied","Data":"7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02"}
Feb 27 10:22:50 crc kubenswrapper[4998]: I0227 10:22:50.735351 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7vs8" event={"ID":"5d365be7-41cf-4570-a8fb-ef974affdb95","Type":"ContainerStarted","Data":"4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9"}
Feb 27 10:22:50 crc kubenswrapper[4998]: I0227 10:22:50.754578 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7vs8" podStartSLOduration=32.806481717 podStartE2EDuration="1m6.754560406s" podCreationTimestamp="2026-02-27 10:21:44 +0000 UTC" firstStartedPulling="2026-02-27 10:22:16.287404948 +0000 UTC m=+288.285675916" lastFinishedPulling="2026-02-27 10:22:50.235483587 +0000 UTC m=+322.233754605" observedRunningTime="2026-02-27 10:22:50.751747768 +0000 UTC m=+322.750018756" watchObservedRunningTime="2026-02-27 10:22:50.754560406 +0000 UTC m=+322.752831374"
Feb 27 10:22:51 crc kubenswrapper[4998]: I0227 10:22:51.569385 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sxgdw"
Feb 27 10:22:52 crc kubenswrapper[4998]: I0227 10:22:52.663503 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h2zrl"]
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.024408 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6557cd49cf-q76cz"]
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.024959 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" podUID="acda69a8-df5d-459f-91b5-4ccee1212ef6" containerName="controller-manager" containerID="cri-o://a8c841cc7034777ff25731a6dcff74f8f4e42e2a05a97c78266fa7138d80be69" gracePeriod=30
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.123662 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"]
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.124048 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf" podUID="cf0d239d-a33e-4be9-866a-e98671b2cb99" containerName="route-controller-manager" containerID="cri-o://e2adfecdaf3fa40a5a2538e6d352112ed186ba383e9157f7976aa4ea683ae08b" gracePeriod=30
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.754199 4998 generic.go:334] "Generic (PLEG): container finished" podID="cf0d239d-a33e-4be9-866a-e98671b2cb99" containerID="e2adfecdaf3fa40a5a2538e6d352112ed186ba383e9157f7976aa4ea683ae08b" exitCode=0
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.754292 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf" event={"ID":"cf0d239d-a33e-4be9-866a-e98671b2cb99","Type":"ContainerDied","Data":"e2adfecdaf3fa40a5a2538e6d352112ed186ba383e9157f7976aa4ea683ae08b"}
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.756338 4998 generic.go:334] "Generic (PLEG): container finished" podID="acda69a8-df5d-459f-91b5-4ccee1212ef6" containerID="a8c841cc7034777ff25731a6dcff74f8f4e42e2a05a97c78266fa7138d80be69" exitCode=0
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.756368 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" event={"ID":"acda69a8-df5d-459f-91b5-4ccee1212ef6","Type":"ContainerDied","Data":"a8c841cc7034777ff25731a6dcff74f8f4e42e2a05a97c78266fa7138d80be69"}
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.897429 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7vs8"
Feb 27 10:22:54 crc kubenswrapper[4998]: I0227 10:22:54.897485 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l7vs8"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.401581 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.430684 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"]
Feb 27 10:22:55 crc kubenswrapper[4998]: E0227 10:22:55.430911 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerName="registry-server"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.430925 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerName="registry-server"
Feb 27 10:22:55 crc kubenswrapper[4998]: E0227 10:22:55.430934 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb466ff-6f20-443f-983d-49332c97e530" containerName="extract-utilities"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.430940 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb466ff-6f20-443f-983d-49332c97e530" containerName="extract-utilities"
Feb 27 10:22:55 crc kubenswrapper[4998]: E0227 10:22:55.430951 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0d239d-a33e-4be9-866a-e98671b2cb99" containerName="route-controller-manager"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.430958 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0d239d-a33e-4be9-866a-e98671b2cb99" containerName="route-controller-manager"
Feb 27 10:22:55 crc kubenswrapper[4998]: E0227 10:22:55.430967 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerName="extract-utilities"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.430973 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerName="extract-utilities"
Feb 27 10:22:55 crc kubenswrapper[4998]: E0227 10:22:55.430982 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb466ff-6f20-443f-983d-49332c97e530" containerName="registry-server"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.430989 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb466ff-6f20-443f-983d-49332c97e530" containerName="registry-server"
Feb 27 10:22:55 crc kubenswrapper[4998]: E0227 10:22:55.430998 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb466ff-6f20-443f-983d-49332c97e530" containerName="extract-content"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.431004 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb466ff-6f20-443f-983d-49332c97e530" containerName="extract-content"
Feb 27 10:22:55 crc kubenswrapper[4998]: E0227 10:22:55.431014 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerName="extract-content"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.431020 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerName="extract-content"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.431147 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0d239d-a33e-4be9-866a-e98671b2cb99" containerName="route-controller-manager"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.431162 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5dbdfd7-f5af-4244-acbd-508173f391fe" containerName="registry-server"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.431170 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb466ff-6f20-443f-983d-49332c97e530" containerName="registry-server"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.431530 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.442751 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"]
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.587150 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzrwc\" (UniqueName: \"kubernetes.io/projected/cf0d239d-a33e-4be9-866a-e98671b2cb99-kube-api-access-fzrwc\") pod \"cf0d239d-a33e-4be9-866a-e98671b2cb99\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") "
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.587304 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-client-ca\") pod \"cf0d239d-a33e-4be9-866a-e98671b2cb99\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") "
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.587335 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-config\") pod \"cf0d239d-a33e-4be9-866a-e98671b2cb99\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") "
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.587386 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0d239d-a33e-4be9-866a-e98671b2cb99-serving-cert\") pod \"cf0d239d-a33e-4be9-866a-e98671b2cb99\" (UID: \"cf0d239d-a33e-4be9-866a-e98671b2cb99\") "
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.587582 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqcm2\" (UniqueName: \"kubernetes.io/projected/a8cf23db-cd89-41e7-9038-8609a9672f76-kube-api-access-dqcm2\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.587665 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8cf23db-cd89-41e7-9038-8609a9672f76-client-ca\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.587709 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8cf23db-cd89-41e7-9038-8609a9672f76-serving-cert\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.587735 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8cf23db-cd89-41e7-9038-8609a9672f76-config\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.588144 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-client-ca" (OuterVolumeSpecName: "client-ca") pod "cf0d239d-a33e-4be9-866a-e98671b2cb99" (UID: "cf0d239d-a33e-4be9-866a-e98671b2cb99"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.588262 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-config" (OuterVolumeSpecName: "config") pod "cf0d239d-a33e-4be9-866a-e98671b2cb99" (UID: "cf0d239d-a33e-4be9-866a-e98671b2cb99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.592233 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0d239d-a33e-4be9-866a-e98671b2cb99-kube-api-access-fzrwc" (OuterVolumeSpecName: "kube-api-access-fzrwc") pod "cf0d239d-a33e-4be9-866a-e98671b2cb99" (UID: "cf0d239d-a33e-4be9-866a-e98671b2cb99"). InnerVolumeSpecName "kube-api-access-fzrwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.592822 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0d239d-a33e-4be9-866a-e98671b2cb99-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cf0d239d-a33e-4be9-866a-e98671b2cb99" (UID: "cf0d239d-a33e-4be9-866a-e98671b2cb99"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.688977 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8cf23db-cd89-41e7-9038-8609a9672f76-serving-cert\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.689026 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8cf23db-cd89-41e7-9038-8609a9672f76-config\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.689092 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqcm2\" (UniqueName: \"kubernetes.io/projected/a8cf23db-cd89-41e7-9038-8609a9672f76-kube-api-access-dqcm2\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.689182 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8cf23db-cd89-41e7-9038-8609a9672f76-client-ca\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.689521 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.690337 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0d239d-a33e-4be9-866a-e98671b2cb99-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.690355 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0d239d-a33e-4be9-866a-e98671b2cb99-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.690364 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzrwc\" (UniqueName: \"kubernetes.io/projected/cf0d239d-a33e-4be9-866a-e98671b2cb99-kube-api-access-fzrwc\") on node \"crc\" DevicePath \"\""
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.690310 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8cf23db-cd89-41e7-9038-8609a9672f76-client-ca\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.690818 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8cf23db-cd89-41e7-9038-8609a9672f76-config\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.695104 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8cf23db-cd89-41e7-9038-8609a9672f76-serving-cert\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.708825 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqcm2\" (UniqueName: \"kubernetes.io/projected/a8cf23db-cd89-41e7-9038-8609a9672f76-kube-api-access-dqcm2\") pod \"route-controller-manager-86784d4bd6-h6wxc\" (UID: \"a8cf23db-cd89-41e7-9038-8609a9672f76\") " pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.757246 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.769192 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf" event={"ID":"cf0d239d-a33e-4be9-866a-e98671b2cb99","Type":"ContainerDied","Data":"ff76d22657aa244ee68b0add660e7ebbfac8e1bcabb04abb23ba41bb945356ca"}
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.769256 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.769265 4998 scope.go:117] "RemoveContainer" containerID="e2adfecdaf3fa40a5a2538e6d352112ed186ba383e9157f7976aa4ea683ae08b"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.823168 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"]
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.824731 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5485fd6c89-629gf"]
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.909662 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz"
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.952279 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7vs8" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerName="registry-server" probeResult="failure" output=<
Feb 27 10:22:55 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s
Feb 27 10:22:55 crc kubenswrapper[4998]: >
Feb 27 10:22:55 crc kubenswrapper[4998]: I0227 10:22:55.984680 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc"]
Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.096536 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssbmf\" (UniqueName: \"kubernetes.io/projected/acda69a8-df5d-459f-91b5-4ccee1212ef6-kube-api-access-ssbmf\") pod \"acda69a8-df5d-459f-91b5-4ccee1212ef6\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") "
Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.096615 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-config\") pod \"acda69a8-df5d-459f-91b5-4ccee1212ef6\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") "
Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.096648 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-client-ca\") pod \"acda69a8-df5d-459f-91b5-4ccee1212ef6\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") "
Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.096687 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acda69a8-df5d-459f-91b5-4ccee1212ef6-serving-cert\") pod \"acda69a8-df5d-459f-91b5-4ccee1212ef6\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") "
Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.096748 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-proxy-ca-bundles\") pod \"acda69a8-df5d-459f-91b5-4ccee1212ef6\" (UID: \"acda69a8-df5d-459f-91b5-4ccee1212ef6\") "
Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.097738 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "acda69a8-df5d-459f-91b5-4ccee1212ef6" (UID: "acda69a8-df5d-459f-91b5-4ccee1212ef6"). InnerVolumeSpecName "proxy-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.097786 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-client-ca" (OuterVolumeSpecName: "client-ca") pod "acda69a8-df5d-459f-91b5-4ccee1212ef6" (UID: "acda69a8-df5d-459f-91b5-4ccee1212ef6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.097891 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-config" (OuterVolumeSpecName: "config") pod "acda69a8-df5d-459f-91b5-4ccee1212ef6" (UID: "acda69a8-df5d-459f-91b5-4ccee1212ef6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.098571 4998 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.098595 4998 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.098608 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acda69a8-df5d-459f-91b5-4ccee1212ef6-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.103719 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acda69a8-df5d-459f-91b5-4ccee1212ef6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "acda69a8-df5d-459f-91b5-4ccee1212ef6" (UID: 
"acda69a8-df5d-459f-91b5-4ccee1212ef6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.104369 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acda69a8-df5d-459f-91b5-4ccee1212ef6-kube-api-access-ssbmf" (OuterVolumeSpecName: "kube-api-access-ssbmf") pod "acda69a8-df5d-459f-91b5-4ccee1212ef6" (UID: "acda69a8-df5d-459f-91b5-4ccee1212ef6"). InnerVolumeSpecName "kube-api-access-ssbmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.199507 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssbmf\" (UniqueName: \"kubernetes.io/projected/acda69a8-df5d-459f-91b5-4ccee1212ef6-kube-api-access-ssbmf\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.199556 4998 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acda69a8-df5d-459f-91b5-4ccee1212ef6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.774298 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0d239d-a33e-4be9-866a-e98671b2cb99" path="/var/lib/kubelet/pods/cf0d239d-a33e-4be9-866a-e98671b2cb99/volumes" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.777655 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.777662 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6557cd49cf-q76cz" event={"ID":"acda69a8-df5d-459f-91b5-4ccee1212ef6","Type":"ContainerDied","Data":"fdc2b4e809c27042a5fa4067ce6f89426590d00261e7f90bdcdf3444ff72148e"} Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.777742 4998 scope.go:117] "RemoveContainer" containerID="a8c841cc7034777ff25731a6dcff74f8f4e42e2a05a97c78266fa7138d80be69" Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.786162 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc" event={"ID":"a8cf23db-cd89-41e7-9038-8609a9672f76","Type":"ContainerStarted","Data":"b8c7a37930a30748dbe2bec35f7da5484cbb876c3faf4a3c8c8f5183dd7f418e"} Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.786275 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc" event={"ID":"a8cf23db-cd89-41e7-9038-8609a9672f76","Type":"ContainerStarted","Data":"b23dd28c87a3f2f5a772864c17d6e419357d4ff1603414ed152b73c5cf65e82f"} Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.807547 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6557cd49cf-q76cz"] Feb 27 10:22:56 crc kubenswrapper[4998]: I0227 10:22:56.810108 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6557cd49cf-q76cz"] Feb 27 10:22:57 crc kubenswrapper[4998]: I0227 10:22:57.791420 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc" Feb 27 10:22:57 crc kubenswrapper[4998]: I0227 10:22:57.796389 4998 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc" Feb 27 10:22:57 crc kubenswrapper[4998]: I0227 10:22:57.808033 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86784d4bd6-h6wxc" podStartSLOduration=3.808008504 podStartE2EDuration="3.808008504s" podCreationTimestamp="2026-02-27 10:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:57.806327689 +0000 UTC m=+329.804598657" watchObservedRunningTime="2026-02-27 10:22:57.808008504 +0000 UTC m=+329.806279482" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.299791 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bd45f59f8-np7dk"] Feb 27 10:22:58 crc kubenswrapper[4998]: E0227 10:22:58.300001 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acda69a8-df5d-459f-91b5-4ccee1212ef6" containerName="controller-manager" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.300012 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="acda69a8-df5d-459f-91b5-4ccee1212ef6" containerName="controller-manager" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.300102 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="acda69a8-df5d-459f-91b5-4ccee1212ef6" containerName="controller-manager" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.300451 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.304318 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.304649 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.304792 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.305046 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.305092 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.305151 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.316064 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.321304 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bd45f59f8-np7dk"] Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.426081 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-proxy-ca-bundles\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " 
pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.426151 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-client-ca\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.426253 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-config\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.426322 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrvk\" (UniqueName: \"kubernetes.io/projected/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-kube-api-access-glrvk\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.426369 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-serving-cert\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.527623 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glrvk\" (UniqueName: 
\"kubernetes.io/projected/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-kube-api-access-glrvk\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.527706 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-serving-cert\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.527756 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-proxy-ca-bundles\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.527790 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-client-ca\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.527857 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-config\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.529514 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-config\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.529527 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-proxy-ca-bundles\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.529741 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-client-ca\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.532954 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-serving-cert\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.550492 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glrvk\" (UniqueName: \"kubernetes.io/projected/7d36626b-c9f0-4e5b-bdca-15bc2b8515b5-kube-api-access-glrvk\") pod \"controller-manager-6bd45f59f8-np7dk\" (UID: \"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5\") " pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 
10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.615157 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.786047 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acda69a8-df5d-459f-91b5-4ccee1212ef6" path="/var/lib/kubelet/pods/acda69a8-df5d-459f-91b5-4ccee1212ef6/volumes" Feb 27 10:22:58 crc kubenswrapper[4998]: I0227 10:22:58.844789 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bd45f59f8-np7dk"] Feb 27 10:22:58 crc kubenswrapper[4998]: W0227 10:22:58.851370 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d36626b_c9f0_4e5b_bdca_15bc2b8515b5.slice/crio-dda0830906a2e2d7bb4fd738373e7e961900a4bd5c549865806ba1f79e879752 WatchSource:0}: Error finding container dda0830906a2e2d7bb4fd738373e7e961900a4bd5c549865806ba1f79e879752: Status 404 returned error can't find the container with id dda0830906a2e2d7bb4fd738373e7e961900a4bd5c549865806ba1f79e879752 Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.802723 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" event={"ID":"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5","Type":"ContainerStarted","Data":"eeaaaf2e46428b7cd6cfc91ced509e34007f06032eec29d2a64312782e90ad71"} Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.803050 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" event={"ID":"7d36626b-c9f0-4e5b-bdca-15bc2b8515b5","Type":"ContainerStarted","Data":"dda0830906a2e2d7bb4fd738373e7e961900a4bd5c549865806ba1f79e879752"} Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.826044 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" podStartSLOduration=5.826020477 podStartE2EDuration="5.826020477s" podCreationTimestamp="2026-02-27 10:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:22:59.823395656 +0000 UTC m=+331.821666624" watchObservedRunningTime="2026-02-27 10:22:59.826020477 +0000 UTC m=+331.824291445" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.995416 4998 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.995695 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644" gracePeriod=15 Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.995741 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b" gracePeriod=15 Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.995801 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1" gracePeriod=15 Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.995798 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91" gracePeriod=15 Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.995906 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357" gracePeriod=15 Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997353 4998 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 10:22:59 crc kubenswrapper[4998]: E0227 10:22:59.997591 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997609 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: E0227 10:22:59.997619 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997626 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 10:22:59 crc kubenswrapper[4998]: E0227 10:22:59.997635 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997640 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 10:22:59 
crc kubenswrapper[4998]: E0227 10:22:59.997646 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997653 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: E0227 10:22:59.997664 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997671 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: E0227 10:22:59.997680 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997687 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 10:22:59 crc kubenswrapper[4998]: E0227 10:22:59.997697 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997706 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: E0227 10:22:59.997717 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997724 4998 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 10:22:59 crc kubenswrapper[4998]: E0227 10:22:59.997731 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997736 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997821 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997833 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997843 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997852 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997860 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997870 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997883 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: E0227 10:22:59.997980 
4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.997991 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.998102 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.998318 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.999156 4998 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 10:22:59 crc kubenswrapper[4998]: I0227 10:22:59.999609 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.002793 4998 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.049425 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.049484 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.049513 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.049543 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.049561 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.049731 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.049795 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.049976 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.150338 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.150384 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.150408 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.150433 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.150446 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.150484 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.150503 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.150551 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.150608 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.151506 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.151719 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: 
I0227 10:23:00.151742 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.151761 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.151782 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.151802 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.151821 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.811287 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.812878 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.813695 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b" exitCode=0 Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.813715 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357" exitCode=0 Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.813723 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91" exitCode=0 Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.813731 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1" exitCode=2 Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.813787 4998 scope.go:117] "RemoveContainer" containerID="9b6e8ecfcaba19fb742ce57619627409073b5ea272a20cca6cff39ebcef8d3e2" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.815756 4998 generic.go:334] "Generic (PLEG): container finished" podID="ef5a2466-eb1e-408c-934b-cf47168986b8" containerID="b210414b8903d594001c84b551c06e97dc1c675da1ca488233d74f9d9c8ba270" exitCode=0 Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.815794 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"ef5a2466-eb1e-408c-934b-cf47168986b8","Type":"ContainerDied","Data":"b210414b8903d594001c84b551c06e97dc1c675da1ca488233d74f9d9c8ba270"} Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.816199 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.816858 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.821388 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.821948 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:00 crc kubenswrapper[4998]: I0227 10:23:00.822447 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:01 crc kubenswrapper[4998]: I0227 10:23:01.824550 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" 
Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.147010 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.148453 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.148867 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.191073 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef5a2466-eb1e-408c-934b-cf47168986b8-kube-api-access\") pod \"ef5a2466-eb1e-408c-934b-cf47168986b8\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.191368 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-kubelet-dir\") pod \"ef5a2466-eb1e-408c-934b-cf47168986b8\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.191471 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"ef5a2466-eb1e-408c-934b-cf47168986b8" (UID: "ef5a2466-eb1e-408c-934b-cf47168986b8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.191542 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-var-lock\") pod \"ef5a2466-eb1e-408c-934b-cf47168986b8\" (UID: \"ef5a2466-eb1e-408c-934b-cf47168986b8\") " Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.191709 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-var-lock" (OuterVolumeSpecName: "var-lock") pod "ef5a2466-eb1e-408c-934b-cf47168986b8" (UID: "ef5a2466-eb1e-408c-934b-cf47168986b8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.191970 4998 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.191986 4998 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef5a2466-eb1e-408c-934b-cf47168986b8-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.197276 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef5a2466-eb1e-408c-934b-cf47168986b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ef5a2466-eb1e-408c-934b-cf47168986b8" (UID: "ef5a2466-eb1e-408c-934b-cf47168986b8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.293740 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef5a2466-eb1e-408c-934b-cf47168986b8-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.918520 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef5a2466-eb1e-408c-934b-cf47168986b8","Type":"ContainerDied","Data":"0543b2c0434e7c330caf7768811d842a53c27e3a3ad4463af090a7f21ab716c1"} Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.918555 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.918567 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0543b2c0434e7c330caf7768811d842a53c27e3a3ad4463af090a7f21ab716c1" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.931093 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:02 crc kubenswrapper[4998]: I0227 10:23:02.931586 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:03 crc kubenswrapper[4998]: I0227 10:23:03.939728 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 10:23:03 crc kubenswrapper[4998]: I0227 10:23:03.940892 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644" exitCode=0 Feb 27 10:23:03 crc kubenswrapper[4998]: I0227 10:23:03.941007 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52f412f2efbaaa5422a01f300f2fcb61c19d6fc7fb10145e654d09b891891244" Feb 27 10:23:03 crc kubenswrapper[4998]: I0227 10:23:03.960939 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 10:23:03 crc kubenswrapper[4998]: I0227 10:23:03.961736 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:03 crc kubenswrapper[4998]: I0227 10:23:03.962339 4998 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:03 crc kubenswrapper[4998]: I0227 10:23:03.962745 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:03 crc kubenswrapper[4998]: I0227 10:23:03.963095 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" 
pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.111068 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.111403 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.111534 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.111211 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.111501 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.111568 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.111987 4998 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.112056 4998 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.112122 4998 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.775178 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.945710 4998 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.946466 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.946717 4998 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.947204 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.948711 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.949017 4998 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.949418 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.951436 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.951859 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.952210 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 10:23:04.952635 4998 status_manager.go:851] "Failed to get status for pod" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" pod="openshift-marketplace/redhat-operators-l7vs8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l7vs8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:04 crc kubenswrapper[4998]: I0227 
10:23:04.952904 4998 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.002272 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.002978 4998 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.003777 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.004275 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.004719 4998 status_manager.go:851] "Failed to get status for pod" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" pod="openshift-marketplace/redhat-operators-l7vs8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l7vs8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:05 crc kubenswrapper[4998]: E0227 10:23:05.019553 4998 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.020087 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:05 crc kubenswrapper[4998]: W0227 10:23:05.053453 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f39ec620fc304d79b16ad7342d787b3238b1587f5fda10c793b9f3b4ef5c80dd WatchSource:0}: Error finding container f39ec620fc304d79b16ad7342d787b3238b1587f5fda10c793b9f3b4ef5c80dd: Status 404 returned error can't find the container with id f39ec620fc304d79b16ad7342d787b3238b1587f5fda10c793b9f3b4ef5c80dd Feb 27 10:23:05 crc kubenswrapper[4998]: E0227 10:23:05.056831 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18981366ab90ccac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:23:05.056275628 +0000 UTC m=+337.054546596,LastTimestamp:2026-02-27 10:23:05.056275628 +0000 UTC m=+337.054546596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.955635 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee"} Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.956853 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f39ec620fc304d79b16ad7342d787b3238b1587f5fda10c793b9f3b4ef5c80dd"} Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.957390 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:05 crc kubenswrapper[4998]: E0227 10:23:05.957409 4998 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.957598 4998 status_manager.go:851] "Failed to get status 
for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.957797 4998 status_manager.go:851] "Failed to get status for pod" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" pod="openshift-marketplace/redhat-operators-l7vs8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l7vs8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:05 crc kubenswrapper[4998]: I0227 10:23:05.958051 4998 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:08 crc kubenswrapper[4998]: I0227 10:23:08.767514 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:08 crc kubenswrapper[4998]: I0227 10:23:08.768218 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:08 crc kubenswrapper[4998]: I0227 10:23:08.768516 4998 
status_manager.go:851] "Failed to get status for pod" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" pod="openshift-marketplace/redhat-operators-l7vs8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l7vs8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:09 crc kubenswrapper[4998]: E0227 10:23:09.562632 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18981366ab90ccac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:23:05.056275628 +0000 UTC m=+337.054546596,LastTimestamp:2026-02-27 10:23:05.056275628 +0000 UTC m=+337.054546596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:23:10 crc kubenswrapper[4998]: E0227 10:23:10.265852 4998 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:10 crc kubenswrapper[4998]: E0227 10:23:10.266110 4998 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:10 crc kubenswrapper[4998]: E0227 10:23:10.266400 4998 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:10 crc kubenswrapper[4998]: E0227 10:23:10.266748 4998 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:10 crc kubenswrapper[4998]: E0227 10:23:10.267249 4998 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:10 crc kubenswrapper[4998]: I0227 10:23:10.267272 4998 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 27 10:23:10 crc kubenswrapper[4998]: E0227 10:23:10.267499 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Feb 27 10:23:10 crc kubenswrapper[4998]: E0227 10:23:10.468325 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Feb 27 10:23:10 crc kubenswrapper[4998]: E0227 10:23:10.869859 4998 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="800ms" Feb 27 10:23:11 crc kubenswrapper[4998]: E0227 10:23:11.670756 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Feb 27 10:23:12 crc kubenswrapper[4998]: I0227 10:23:12.825375 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:23:12 crc kubenswrapper[4998]: I0227 10:23:12.825472 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:23:12 crc kubenswrapper[4998]: I0227 10:23:12.825594 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:23:12 crc kubenswrapper[4998]: W0227 10:23:12.826543 4998 reflector.go:561] 
object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27459": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:23:12 crc kubenswrapper[4998]: W0227 10:23:12.826539 4998 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27525": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:23:12 crc kubenswrapper[4998]: E0227 10:23:12.826613 4998 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27459\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:23:12 crc kubenswrapper[4998]: E0227 10:23:12.826640 4998 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27525\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:23:12 crc kubenswrapper[4998]: W0227 10:23:12.826934 4998 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27525": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:23:12 crc kubenswrapper[4998]: E0227 10:23:12.827051 4998 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27525\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:23:12 crc kubenswrapper[4998]: I0227 10:23:12.927401 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:23:13 crc kubenswrapper[4998]: E0227 10:23:13.271550 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Feb 27 10:23:13 crc kubenswrapper[4998]: E0227 10:23:13.826519 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:13 crc kubenswrapper[4998]: E0227 10:23:13.826551 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:13 crc kubenswrapper[4998]: E0227 10:23:13.826583 4998 
secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 10:23:13 crc kubenswrapper[4998]: E0227 10:23:13.826635 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:25:15.826606382 +0000 UTC m=+467.824877390 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:13 crc kubenswrapper[4998]: E0227 10:23:13.826671 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:25:15.826649993 +0000 UTC m=+467.824920991 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Feb 27 10:23:13 crc kubenswrapper[4998]: W0227 10:23:13.827416 4998 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27525": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:23:13 crc kubenswrapper[4998]: E0227 10:23:13.827508 4998 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27525\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:23:13 crc kubenswrapper[4998]: E0227 10:23:13.927845 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.015054 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.017073 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 
10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.017256 4998 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a" exitCode=1 Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.017314 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a"} Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.018017 4998 scope.go:117] "RemoveContainer" containerID="522fda246ba145bdebb76c4053bdce6892f1420b31e9bf785d9f94cc17d7f88a" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.018418 4998 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.018922 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.019527 4998 status_manager.go:851] "Failed to get status for pod" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" pod="openshift-marketplace/redhat-operators-l7vs8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l7vs8\": dial tcp 38.102.83.173:6443: connect: connection 
refused" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.020178 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:14 crc kubenswrapper[4998]: W0227 10:23:14.453994 4998 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27525": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:23:14 crc kubenswrapper[4998]: E0227 10:23:14.454138 4998 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27525\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.765094 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.765961 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.766382 4998 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.766985 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.767854 4998 status_manager.go:851] "Failed to get status for pod" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" pod="openshift-marketplace/redhat-operators-l7vs8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l7vs8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.777638 4998 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.777683 4998 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb" Feb 27 10:23:14 crc kubenswrapper[4998]: E0227 10:23:14.778165 4998 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:14 crc kubenswrapper[4998]: I0227 10:23:14.778658 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:14 crc kubenswrapper[4998]: W0227 10:23:14.794315 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-e33c79e2c43675fa96ce40ccb47d046666fd49d6622cdf39b6722d9de19975a5 WatchSource:0}: Error finding container e33c79e2c43675fa96ce40ccb47d046666fd49d6622cdf39b6722d9de19975a5: Status 404 returned error can't find the container with id e33c79e2c43675fa96ce40ccb47d046666fd49d6622cdf39b6722d9de19975a5 Feb 27 10:23:14 crc kubenswrapper[4998]: E0227 10:23:14.827369 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:14 crc kubenswrapper[4998]: E0227 10:23:14.827423 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:14 crc kubenswrapper[4998]: E0227 10:23:14.827509 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-27 10:25:16.82748758 +0000 UTC m=+468.825758548 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:14 crc kubenswrapper[4998]: E0227 10:23:14.928614 4998 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:14 crc kubenswrapper[4998]: E0227 10:23:14.928663 4998 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:14 crc kubenswrapper[4998]: E0227 10:23:14.928757 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:25:16.928734336 +0000 UTC m=+468.927005304 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Feb 27 10:23:15 crc kubenswrapper[4998]: W0227 10:23:15.013374 4998 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27525": dial tcp 38.102.83.173:6443: connect: connection refused Feb 27 10:23:15 crc kubenswrapper[4998]: E0227 10:23:15.013459 4998 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27525\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.023415 4998 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="87f82957c4b02c49dc998345f0dee230023ee2948928177a9322f4d1c410820f" exitCode=0 Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.023501 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"87f82957c4b02c49dc998345f0dee230023ee2948928177a9322f4d1c410820f"} Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.023545 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e33c79e2c43675fa96ce40ccb47d046666fd49d6622cdf39b6722d9de19975a5"} Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.023830 4998 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.023845 4998 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb" Feb 27 10:23:15 crc kubenswrapper[4998]: E0227 10:23:15.024272 4998 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.024275 4998 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.024870 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.025108 4998 status_manager.go:851] "Failed to get status for pod" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" pod="openshift-marketplace/redhat-operators-l7vs8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l7vs8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.025356 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.026171 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.027439 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.027491 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e460b8461fc9f63d0c36756323beccb9d3cf644de67afd2cf89f3394844d00b8"} Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.028178 4998 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.028673 4998 status_manager.go:851] "Failed to get status for pod" podUID="7d36626b-c9f0-4e5b-bdca-15bc2b8515b5" 
pod="openshift-controller-manager/controller-manager-6bd45f59f8-np7dk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6bd45f59f8-np7dk\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.029030 4998 status_manager.go:851] "Failed to get status for pod" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" pod="openshift-marketplace/redhat-operators-l7vs8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l7vs8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:15 crc kubenswrapper[4998]: I0227 10:23:15.029423 4998 status_manager.go:851] "Failed to get status for pod" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 27 10:23:16 crc kubenswrapper[4998]: I0227 10:23:16.043209 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"208d04f2580b17c497b1654a795e47d2ed5992457a0c82115d8a43eea99ce2a6"} Feb 27 10:23:16 crc kubenswrapper[4998]: I0227 10:23:16.043550 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e2e339ff534d3d9dba2ac58455647aaad86bdbf2ee206d96af346c28a961aadb"} Feb 27 10:23:16 crc kubenswrapper[4998]: I0227 10:23:16.043562 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"76be2a82f76c2f8eb737d8496ec03cdd50064653b4a3b3ac7c68a6db8981f9f6"} Feb 27 
10:23:16 crc kubenswrapper[4998]: I0227 10:23:16.043571 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"96bd4b3b69e6b08aa74484e9d3c96fc721baddacae116e7d92906f198470801a"}
Feb 27 10:23:16 crc kubenswrapper[4998]: I0227 10:23:16.423564 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:23:17 crc kubenswrapper[4998]: I0227 10:23:17.052807 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"77ede5bc3fe44363b122dc7600852a02eb87bbc45681c1ff24dc33916f4a13f0"}
Feb 27 10:23:17 crc kubenswrapper[4998]: I0227 10:23:17.053006 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:23:17 crc kubenswrapper[4998]: I0227 10:23:17.053095 4998 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb"
Feb 27 10:23:17 crc kubenswrapper[4998]: I0227 10:23:17.053122 4998 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb"
Feb 27 10:23:17 crc kubenswrapper[4998]: I0227 10:23:17.690494 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" podUID="d34656b6-50d4-4173-a40b-5a9eddb99397" containerName="oauth-openshift" containerID="cri-o://603b5f817c5cb9fb822d92c54a11e961ca0df9a4fdf59a1393046f7177b32015" gracePeriod=15
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.060917 4998 generic.go:334] "Generic (PLEG): container finished" podID="d34656b6-50d4-4173-a40b-5a9eddb99397" containerID="603b5f817c5cb9fb822d92c54a11e961ca0df9a4fdf59a1393046f7177b32015" exitCode=0
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.061017 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" event={"ID":"d34656b6-50d4-4173-a40b-5a9eddb99397","Type":"ContainerDied","Data":"603b5f817c5cb9fb822d92c54a11e961ca0df9a4fdf59a1393046f7177b32015"}
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.231302 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314352 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-service-ca\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314405 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-dir\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314431 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-policies\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314453 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-cliconfig\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314493 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-login\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314525 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-serving-cert\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314547 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-ocp-branding-template\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314568 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-error\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314587 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mbjh\" (UniqueName: \"kubernetes.io/projected/d34656b6-50d4-4173-a40b-5a9eddb99397-kube-api-access-6mbjh\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314613 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-idp-0-file-data\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314628 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-provider-selection\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314648 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-router-certs\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314672 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-session\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.314710 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-trusted-ca-bundle\") pod \"d34656b6-50d4-4173-a40b-5a9eddb99397\" (UID: \"d34656b6-50d4-4173-a40b-5a9eddb99397\") "
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.315357 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.315362 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.315909 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.316052 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.316070 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.326855 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.327454 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.327702 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.327707 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34656b6-50d4-4173-a40b-5a9eddb99397-kube-api-access-6mbjh" (OuterVolumeSpecName: "kube-api-access-6mbjh") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "kube-api-access-6mbjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.328420 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.328556 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.328854 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.330557 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.330697 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d34656b6-50d4-4173-a40b-5a9eddb99397" (UID: "d34656b6-50d4-4173-a40b-5a9eddb99397"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.362712 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.363235 4998 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.363384 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416346 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416386 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416402 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mbjh\" (UniqueName: \"kubernetes.io/projected/d34656b6-50d4-4173-a40b-5a9eddb99397-kube-api-access-6mbjh\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416415 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416426 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416437 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416446 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416455 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416466 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416475 4998 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416483 4998 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416491 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416499 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.416508 4998 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d34656b6-50d4-4173-a40b-5a9eddb99397-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:23:18 crc kubenswrapper[4998]: I0227 10:23:18.458963 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 27 10:23:19 crc kubenswrapper[4998]: I0227 10:23:19.066829 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl" event={"ID":"d34656b6-50d4-4173-a40b-5a9eddb99397","Type":"ContainerDied","Data":"4bb522700ddb2116d394225f1d70a75592cea5509207081f17f3cb59ec6ea747"}
Feb 27 10:23:19 crc kubenswrapper[4998]: I0227 10:23:19.066872 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h2zrl"
Feb 27 10:23:19 crc kubenswrapper[4998]: I0227 10:23:19.066879 4998 scope.go:117] "RemoveContainer" containerID="603b5f817c5cb9fb822d92c54a11e961ca0df9a4fdf59a1393046f7177b32015"
Feb 27 10:23:19 crc kubenswrapper[4998]: I0227 10:23:19.779388 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:23:19 crc kubenswrapper[4998]: I0227 10:23:19.779465 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:23:19 crc kubenswrapper[4998]: I0227 10:23:19.785025 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:23:21 crc kubenswrapper[4998]: I0227 10:23:21.383409 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 27 10:23:21 crc kubenswrapper[4998]: I0227 10:23:21.728148 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 27 10:23:21 crc kubenswrapper[4998]: I0227 10:23:21.730030 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 27 10:23:22 crc kubenswrapper[4998]: I0227 10:23:22.066365 4998 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:23:22 crc kubenswrapper[4998]: I0227 10:23:22.182251 4998 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="680c9203-e073-4048-9e23-54add77c55c1"
Feb 27 10:23:22 crc kubenswrapper[4998]: E0227 10:23:22.638067 4998 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError"
Feb 27 10:23:23 crc kubenswrapper[4998]: I0227 10:23:23.089010 4998 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb"
Feb 27 10:23:23 crc kubenswrapper[4998]: I0227 10:23:23.089040 4998 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb"
Feb 27 10:23:23 crc kubenswrapper[4998]: I0227 10:23:23.093929 4998 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="680c9203-e073-4048-9e23-54add77c55c1"
Feb 27 10:23:23 crc kubenswrapper[4998]: I0227 10:23:23.095532 4998 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://96bd4b3b69e6b08aa74484e9d3c96fc721baddacae116e7d92906f198470801a"
Feb 27 10:23:23 crc kubenswrapper[4998]: I0227 10:23:23.095564 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:23:24 crc kubenswrapper[4998]: I0227 10:23:24.094158 4998 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb"
Feb 27 10:23:24 crc kubenswrapper[4998]: I0227 10:23:24.094195 4998 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5fc05123-f698-45ff-a3c3-13c18e466cdb"
Feb 27 10:23:24 crc kubenswrapper[4998]: I0227 10:23:24.100203 4998 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="680c9203-e073-4048-9e23-54add77c55c1"
Feb 27 10:23:28 crc kubenswrapper[4998]: I0227 10:23:28.363898 4998 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 27 10:23:28 crc kubenswrapper[4998]: I0227 10:23:28.364273 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 27 10:23:31 crc kubenswrapper[4998]: I0227 10:23:31.402069 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.033347 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.110300 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.223149 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.336051 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.819068 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.865794 4998 scope.go:117] "RemoveContainer" containerID="df65cc66d990595a8fdb751234ffd8b9e890f53de56c736241bc8b52e7341357"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.887358 4998 scope.go:117] "RemoveContainer" containerID="8b42644d7b4769348dda3a3ed5f82a860712159a6e62df2ee6a04b6e890851fb"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.910401 4998 scope.go:117] "RemoveContainer" containerID="9f0cce1c3309bc7de11ea57038f7a306b781b41434337fbf3da176ca2dba2b91"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.929499 4998 scope.go:117] "RemoveContainer" containerID="93c102b441273f39ca361ed86f6ef259e15d8adb7eea414db62fabebfda0dee1"
Feb 27 10:23:32 crc kubenswrapper[4998]: I0227 10:23:32.956204 4998 scope.go:117] "RemoveContainer" containerID="2c807f36a54905b570613990b7428942b90cbad2d8d8dfbab02ee177d5575644"
Feb 27 10:23:33 crc kubenswrapper[4998]: I0227 10:23:33.343286 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 27 10:23:33 crc kubenswrapper[4998]: I0227 10:23:33.633889 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 27 10:23:33 crc kubenswrapper[4998]: E0227 10:23:33.779081 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 10:23:33 crc kubenswrapper[4998]: E0227 10:23:33.794397 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 10:23:33 crc kubenswrapper[4998]: E0227 10:23:33.803690 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-86xkz" podUID="40178d6d-6068-4937-b7d5-883538892cc5"
Feb 27 10:23:33 crc kubenswrapper[4998]: E0227 10:23:33.810062 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 10:23:33 crc kubenswrapper[4998]: I0227 10:23:33.821311 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:23:33 crc kubenswrapper[4998]: I0227 10:23:33.827278 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40178d6d-6068-4937-b7d5-883538892cc5-metrics-certs\") pod \"network-metrics-daemon-86xkz\" (UID: \"40178d6d-6068-4937-b7d5-883538892cc5\") " pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.256100 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.271424 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.311900 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.396212 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.422218 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.612325 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.647217 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.731219 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.785037 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.871786 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.899719 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 27 10:23:34 crc kubenswrapper[4998]: I0227 10:23:34.954380 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.017343 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.027676 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.043066 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.051581 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.197557 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.361626 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.611381 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.650844 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.664219 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.761364 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.860382 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.878476 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 27 10:23:35 crc kubenswrapper[4998]: I0227 10:23:35.998555 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.021010 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.060093 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.131888 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.138590 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.150802 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.224993 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.241610 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.287122 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.399357 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.416761 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.506166 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.528892 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.587900 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.592372 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.610118 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.613800 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.838147 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.885987 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.987091 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 27 10:23:36 crc kubenswrapper[4998]: I0227 10:23:36.997376 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.039794 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.047136 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.144001 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.145160 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.173117 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.185673 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.229841 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.272980 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.304326 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.325741 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.335692 4998 reflector.go:368] Caches populated for
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.420841 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.498424 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.513029 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.566423 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.587814 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.588331 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.636297 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.644913 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.669384 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.673241 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.764852 4998 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver"/"audit-1" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.860574 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.861698 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.940187 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 10:23:37 crc kubenswrapper[4998]: I0227 10:23:37.960056 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.037664 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.043599 4998 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.091786 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.227318 4998 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.235968 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-h2zrl"] Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.236052 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.242181 4998 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.257451 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.257429368 podStartE2EDuration="16.257429368s" podCreationTimestamp="2026-02-27 10:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:23:38.25490115 +0000 UTC m=+370.253172198" watchObservedRunningTime="2026-02-27 10:23:38.257429368 +0000 UTC m=+370.255700336" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.270266 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.347632 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.367528 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.371309 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.414055 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.591340 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.624344 4998 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.732203 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.738056 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.756597 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.772757 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34656b6-50d4-4173-a40b-5a9eddb99397" path="/var/lib/kubelet/pods/d34656b6-50d4-4173-a40b-5a9eddb99397/volumes" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.811917 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.845604 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 27 10:23:38 crc kubenswrapper[4998]: I0227 10:23:38.966289 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.047184 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.232529 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.237563 4998 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.265618 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.282726 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.296662 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.340468 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.365419 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.486495 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.524180 4998 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.683427 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.709821 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.710416 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 
27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.754357 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.763924 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.785276 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.803508 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.819716 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.877217 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 10:23:39 crc kubenswrapper[4998]: I0227 10:23:39.928300 4998 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.006965 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.015262 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.029175 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.057560 4998 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.130060 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.245801 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.359604 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.360486 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.373751 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.439004 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.481516 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.507255 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.565351 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.616591 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 
10:23:40.625765 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.706114 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.728871 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.796168 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.801855 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.846849 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.867308 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.905997 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.911319 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 10:23:40 crc kubenswrapper[4998]: I0227 10:23:40.912053 4998 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.048170 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 27 
10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.150085 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.290717 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.367114 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.373614 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.399823 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.487312 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.522153 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.524913 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.667572 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.745317 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 10:23:41 crc kubenswrapper[4998]: I0227 10:23:41.920113 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 
10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.005955 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.019342 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.123090 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.368421 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.441771 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.447543 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.447727 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.518400 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.521051 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.537544 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.571626 4998 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.625790 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.633830 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.679361 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.730638 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.831730 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.834572 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 10:23:42 crc kubenswrapper[4998]: I0227 10:23:42.991548 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.057860 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.114661 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.209803 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.248179 4998 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.320333 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.546293 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.652012 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.685991 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.774257 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.832126 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.933957 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 10:23:43 crc kubenswrapper[4998]: I0227 10:23:43.997055 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.135723 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.216050 4998 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.254551 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.390253 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.526414 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.550459 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.563456 4998 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.563698 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee" gracePeriod=5 Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.569163 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.604470 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.615720 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 10:23:44 crc 
kubenswrapper[4998]: I0227 10:23:44.741499 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.746594 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.762080 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.763896 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.764590 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.786542 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.981938 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 27 10:23:44 crc kubenswrapper[4998]: I0227 10:23:44.987738 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.021791 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.049735 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.105537 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.148282 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.258371 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.396869 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.463445 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.503170 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.571480 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.707745 4998 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.717531 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.744739 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.756640 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.830120 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.863982 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.919629 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.933162 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.993975 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 27 10:23:45 crc kubenswrapper[4998]: I0227 10:23:45.999474 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.074604 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.138259 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.224568 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.313260 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.384536 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.588423 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.764442 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.765173 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.767534 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.770594 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.775939 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-86xkz"
Feb 27 10:23:46 crc kubenswrapper[4998]: I0227 10:23:46.800110 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 27 10:23:47 crc kubenswrapper[4998]: I0227 10:23:47.060388 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 27 10:23:47 crc kubenswrapper[4998]: I0227 10:23:47.128815 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 27 10:23:47 crc kubenswrapper[4998]: I0227 10:23:47.212002 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 27 10:23:47 crc kubenswrapper[4998]: I0227 10:23:47.223646 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-86xkz"]
Feb 27 10:23:47 crc kubenswrapper[4998]: I0227 10:23:47.636361 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.123354 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.205100 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.236108 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-86xkz" event={"ID":"40178d6d-6068-4937-b7d5-883538892cc5","Type":"ContainerStarted","Data":"180046643a084e48dbd7b5f93b11f17d1ba3c6cdcc3eaa3c643aae1c06a8a92b"}
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.236523 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-86xkz" event={"ID":"40178d6d-6068-4937-b7d5-883538892cc5","Type":"ContainerStarted","Data":"6872f35b5dcc4be556d3dd2b6f71bab86420652379b2d7e2d01ac777f7890eb9"}
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.236723 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-86xkz" event={"ID":"40178d6d-6068-4937-b7d5-883538892cc5","Type":"ContainerStarted","Data":"afad73b0d80faf75268c6d471d6ac8f1ea3377632782098b11bde75ae0f1c5fe"}
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.254593 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-86xkz" podStartSLOduration=311.254576661 podStartE2EDuration="5m11.254576661s" podCreationTimestamp="2026-02-27 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:23:48.252630169 +0000 UTC m=+380.250901137" watchObservedRunningTime="2026-02-27 10:23:48.254576661 +0000 UTC m=+380.252847629"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.332985 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79cb59f449-4msj4"]
Feb 27 10:23:48 crc kubenswrapper[4998]: E0227 10:23:48.333175 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34656b6-50d4-4173-a40b-5a9eddb99397" containerName="oauth-openshift"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.333186 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34656b6-50d4-4173-a40b-5a9eddb99397" containerName="oauth-openshift"
Feb 27 10:23:48 crc kubenswrapper[4998]: E0227 10:23:48.333201 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.333207 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 27 10:23:48 crc kubenswrapper[4998]: E0227 10:23:48.333214 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" containerName="installer"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.333233 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" containerName="installer"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.333346 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34656b6-50d4-4173-a40b-5a9eddb99397" containerName="oauth-openshift"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.333358 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef5a2466-eb1e-408c-934b-cf47168986b8" containerName="installer"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.333368 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.333692 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.337072 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.340832 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.341260 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.341299 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.341474 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.341532 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.341802 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.342407 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.343530 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.343670 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.343831 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.346100 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.352279 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.353422 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.355266 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79cb59f449-4msj4"]
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.356305 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430242 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-template-error\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430285 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-audit-policies\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430316 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-session\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430334 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/29a9145f-313e-44b3-b1b3-9cd51931f1f3-audit-dir\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430404 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430454 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-template-login\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430490 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430604 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430627 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430736 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5pz9\" (UniqueName: \"kubernetes.io/projected/29a9145f-313e-44b3-b1b3-9cd51931f1f3-kube-api-access-r5pz9\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430800 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430836 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430850 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.430870 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.495171 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.517251 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.532032 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.532270 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533005 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5pz9\" (UniqueName: \"kubernetes.io/projected/29a9145f-313e-44b3-b1b3-9cd51931f1f3-kube-api-access-r5pz9\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533469 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533492 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533616 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533658 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533697 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533756 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-template-error\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533809 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-audit-policies\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533877 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-session\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533921 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/29a9145f-313e-44b3-b1b3-9cd51931f1f3-audit-dir\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533952 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.533984 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-template-login\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.534025 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.534807 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-audit-policies\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.534935 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/29a9145f-313e-44b3-b1b3-9cd51931f1f3-audit-dir\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.535676 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.535953 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.539277 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.541135 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.542072 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.543480 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.543620 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-template-error\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.544042 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.551486 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-user-template-login\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.557303 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/29a9145f-313e-44b3-b1b3-9cd51931f1f3-v4-0-config-system-session\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.576191 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5pz9\" (UniqueName: \"kubernetes.io/projected/29a9145f-313e-44b3-b1b3-9cd51931f1f3-kube-api-access-r5pz9\") pod \"oauth-openshift-79cb59f449-4msj4\" (UID: \"29a9145f-313e-44b3-b1b3-9cd51931f1f3\") " pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.656313 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.764637 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:23:48 crc kubenswrapper[4998]: I0227 10:23:48.897750 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79cb59f449-4msj4"]
Feb 27 10:23:49 crc kubenswrapper[4998]: I0227 10:23:49.252599 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4" event={"ID":"29a9145f-313e-44b3-b1b3-9cd51931f1f3","Type":"ContainerStarted","Data":"12834bb2c2d396c587e26ea2b8db8ddd51575e12a03a212b688c6d26be796578"}
Feb 27 10:23:49 crc kubenswrapper[4998]: I0227 10:23:49.252657 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4" event={"ID":"29a9145f-313e-44b3-b1b3-9cd51931f1f3","Type":"ContainerStarted","Data":"b7b7ed25b3723bf38294c1efbf8b4c94eb2098420415fae20a2a309a56ca171c"}
Feb 27 10:23:49 crc kubenswrapper[4998]: I0227 10:23:49.277312 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4" podStartSLOduration=57.277296056 podStartE2EDuration="57.277296056s" podCreationTimestamp="2026-02-27 10:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:23:49.275193889 +0000 UTC m=+381.273464867" watchObservedRunningTime="2026-02-27 10:23:49.277296056 +0000 UTC m=+381.275567024"
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.142927 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.144327 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.165722 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.165778 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.165829 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.165847 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.165866 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.165916 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.165936 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.165939 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.166024 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a").
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.166131 4998 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.166143 4998 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.166152 4998 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.166159 4998 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.177473 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.261454 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.261524 4998 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee" exitCode=137 Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.261660 4998 scope.go:117] "RemoveContainer" containerID="0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.261837 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.261860 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.266511 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79cb59f449-4msj4" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.266877 4998 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.281384 4998 scope.go:117] "RemoveContainer" containerID="0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee" Feb 27 10:23:50 crc kubenswrapper[4998]: E0227 10:23:50.281873 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee\": container with ID starting with 0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee not found: ID does not exist" containerID="0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.281911 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee"} err="failed to get container status \"0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee\": rpc error: code = NotFound desc = could not find container \"0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee\": container with ID starting with 0ef5641ff4de281f5fa26ccb7af64ebc9256bd84cf0df69ec6cfaa9d44490bee not found: ID does not exist" Feb 27 10:23:50 crc kubenswrapper[4998]: I0227 10:23:50.772116 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.172730 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536464-dq25g"] Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.173834 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536464-dq25g" Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.175510 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.175521 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.176124 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.182011 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536464-dq25g"] Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.295253 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4bnb\" (UniqueName: \"kubernetes.io/projected/f3688a39-d826-4d07-9d29-d75243003515-kube-api-access-j4bnb\") pod \"auto-csr-approver-29536464-dq25g\" (UID: \"f3688a39-d826-4d07-9d29-d75243003515\") " pod="openshift-infra/auto-csr-approver-29536464-dq25g" Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.396932 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4bnb\" (UniqueName: \"kubernetes.io/projected/f3688a39-d826-4d07-9d29-d75243003515-kube-api-access-j4bnb\") pod \"auto-csr-approver-29536464-dq25g\" (UID: \"f3688a39-d826-4d07-9d29-d75243003515\") " pod="openshift-infra/auto-csr-approver-29536464-dq25g" Feb 27 
10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.417043 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4bnb\" (UniqueName: \"kubernetes.io/projected/f3688a39-d826-4d07-9d29-d75243003515-kube-api-access-j4bnb\") pod \"auto-csr-approver-29536464-dq25g\" (UID: \"f3688a39-d826-4d07-9d29-d75243003515\") " pod="openshift-infra/auto-csr-approver-29536464-dq25g" Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.502613 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536464-dq25g" Feb 27 10:24:00 crc kubenswrapper[4998]: I0227 10:24:00.888008 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536464-dq25g"] Feb 27 10:24:01 crc kubenswrapper[4998]: I0227 10:24:01.319895 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536464-dq25g" event={"ID":"f3688a39-d826-4d07-9d29-d75243003515","Type":"ContainerStarted","Data":"1639d417377836bc59e73055466d67909cd9ce70b4401e2037699da01dfeae5b"} Feb 27 10:24:02 crc kubenswrapper[4998]: I0227 10:24:02.342196 4998 generic.go:334] "Generic (PLEG): container finished" podID="f3688a39-d826-4d07-9d29-d75243003515" containerID="6c6e2536bb483858431764173e6b24017bc191e958409ea587c963ea1d34fef8" exitCode=0 Feb 27 10:24:02 crc kubenswrapper[4998]: I0227 10:24:02.342478 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536464-dq25g" event={"ID":"f3688a39-d826-4d07-9d29-d75243003515","Type":"ContainerDied","Data":"6c6e2536bb483858431764173e6b24017bc191e958409ea587c963ea1d34fef8"} Feb 27 10:24:03 crc kubenswrapper[4998]: I0227 10:24:03.594690 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536464-dq25g" Feb 27 10:24:03 crc kubenswrapper[4998]: I0227 10:24:03.739438 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4bnb\" (UniqueName: \"kubernetes.io/projected/f3688a39-d826-4d07-9d29-d75243003515-kube-api-access-j4bnb\") pod \"f3688a39-d826-4d07-9d29-d75243003515\" (UID: \"f3688a39-d826-4d07-9d29-d75243003515\") " Feb 27 10:24:03 crc kubenswrapper[4998]: I0227 10:24:03.745695 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3688a39-d826-4d07-9d29-d75243003515-kube-api-access-j4bnb" (OuterVolumeSpecName: "kube-api-access-j4bnb") pod "f3688a39-d826-4d07-9d29-d75243003515" (UID: "f3688a39-d826-4d07-9d29-d75243003515"). InnerVolumeSpecName "kube-api-access-j4bnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:24:03 crc kubenswrapper[4998]: I0227 10:24:03.841548 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4bnb\" (UniqueName: \"kubernetes.io/projected/f3688a39-d826-4d07-9d29-d75243003515-kube-api-access-j4bnb\") on node \"crc\" DevicePath \"\"" Feb 27 10:24:04 crc kubenswrapper[4998]: I0227 10:24:04.358697 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536464-dq25g" event={"ID":"f3688a39-d826-4d07-9d29-d75243003515","Type":"ContainerDied","Data":"1639d417377836bc59e73055466d67909cd9ce70b4401e2037699da01dfeae5b"} Feb 27 10:24:04 crc kubenswrapper[4998]: I0227 10:24:04.358738 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1639d417377836bc59e73055466d67909cd9ce70b4401e2037699da01dfeae5b" Feb 27 10:24:04 crc kubenswrapper[4998]: I0227 10:24:04.359040 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536464-dq25g" Feb 27 10:24:23 crc kubenswrapper[4998]: I0227 10:24:23.077340 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 10:24:40 crc kubenswrapper[4998]: I0227 10:24:40.504560 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:24:40 crc kubenswrapper[4998]: I0227 10:24:40.505103 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.305552 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7vs8"] Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.306567 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l7vs8" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerName="registry-server" containerID="cri-o://4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9" gracePeriod=2 Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.660510 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.704264 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-972ls\" (UniqueName: \"kubernetes.io/projected/5d365be7-41cf-4570-a8fb-ef974affdb95-kube-api-access-972ls\") pod \"5d365be7-41cf-4570-a8fb-ef974affdb95\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.704374 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-catalog-content\") pod \"5d365be7-41cf-4570-a8fb-ef974affdb95\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.704430 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-utilities\") pod \"5d365be7-41cf-4570-a8fb-ef974affdb95\" (UID: \"5d365be7-41cf-4570-a8fb-ef974affdb95\") " Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.705499 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-utilities" (OuterVolumeSpecName: "utilities") pod "5d365be7-41cf-4570-a8fb-ef974affdb95" (UID: "5d365be7-41cf-4570-a8fb-ef974affdb95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.709785 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d365be7-41cf-4570-a8fb-ef974affdb95-kube-api-access-972ls" (OuterVolumeSpecName: "kube-api-access-972ls") pod "5d365be7-41cf-4570-a8fb-ef974affdb95" (UID: "5d365be7-41cf-4570-a8fb-ef974affdb95"). InnerVolumeSpecName "kube-api-access-972ls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.785486 4998 generic.go:334] "Generic (PLEG): container finished" podID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerID="4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9" exitCode=0 Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.785545 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7vs8" event={"ID":"5d365be7-41cf-4570-a8fb-ef974affdb95","Type":"ContainerDied","Data":"4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9"} Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.785589 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7vs8" event={"ID":"5d365be7-41cf-4570-a8fb-ef974affdb95","Type":"ContainerDied","Data":"d58e9922e839a8ab7e3932375510ab76b505cfa2430bda779613c32344983028"} Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.785616 4998 scope.go:117] "RemoveContainer" containerID="4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.786255 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7vs8" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.805848 4998 scope.go:117] "RemoveContainer" containerID="7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.806260 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.806305 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-972ls\" (UniqueName: \"kubernetes.io/projected/5d365be7-41cf-4570-a8fb-ef974affdb95-kube-api-access-972ls\") on node \"crc\" DevicePath \"\"" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.824645 4998 scope.go:117] "RemoveContainer" containerID="17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.843748 4998 scope.go:117] "RemoveContainer" containerID="4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9" Feb 27 10:25:04 crc kubenswrapper[4998]: E0227 10:25:04.844180 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9\": container with ID starting with 4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9 not found: ID does not exist" containerID="4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.844239 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9"} err="failed to get container status \"4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9\": rpc error: code = NotFound desc = could not 
find container \"4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9\": container with ID starting with 4e79d2ec61f6186bf9c8f9cab79b7bf4ab9ffb7f070484ccd3d18cafb27ad4c9 not found: ID does not exist" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.844267 4998 scope.go:117] "RemoveContainer" containerID="7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02" Feb 27 10:25:04 crc kubenswrapper[4998]: E0227 10:25:04.844591 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02\": container with ID starting with 7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02 not found: ID does not exist" containerID="7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.844713 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02"} err="failed to get container status \"7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02\": rpc error: code = NotFound desc = could not find container \"7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02\": container with ID starting with 7d9bd734251bd7221062ed7b1e51ad5daa0836d4cd9ec22d8422d8a298133a02 not found: ID does not exist" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.844827 4998 scope.go:117] "RemoveContainer" containerID="17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d" Feb 27 10:25:04 crc kubenswrapper[4998]: E0227 10:25:04.845456 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d\": container with ID starting with 17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d not found: ID 
does not exist" containerID="17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.845482 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d"} err="failed to get container status \"17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d\": rpc error: code = NotFound desc = could not find container \"17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d\": container with ID starting with 17caf79fbc4a776fced267f5159f61cdcffb1810336229fcf4015a13b9b7325d not found: ID does not exist" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.852804 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d365be7-41cf-4570-a8fb-ef974affdb95" (UID: "5d365be7-41cf-4570-a8fb-ef974affdb95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:25:04 crc kubenswrapper[4998]: I0227 10:25:04.907987 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d365be7-41cf-4570-a8fb-ef974affdb95-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:25:05 crc kubenswrapper[4998]: I0227 10:25:05.127787 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7vs8"] Feb 27 10:25:05 crc kubenswrapper[4998]: I0227 10:25:05.134748 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l7vs8"] Feb 27 10:25:06 crc kubenswrapper[4998]: I0227 10:25:06.776436 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" path="/var/lib/kubelet/pods/5d365be7-41cf-4570-a8fb-ef974affdb95/volumes" Feb 27 10:25:10 crc kubenswrapper[4998]: I0227 10:25:10.504598 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:25:10 crc kubenswrapper[4998]: I0227 10:25:10.504690 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.172705 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m8z6k"] Feb 27 10:25:15 crc kubenswrapper[4998]: E0227 10:25:15.173492 4998 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerName="extract-content" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.173511 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerName="extract-content" Feb 27 10:25:15 crc kubenswrapper[4998]: E0227 10:25:15.173528 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerName="registry-server" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.173534 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerName="registry-server" Feb 27 10:25:15 crc kubenswrapper[4998]: E0227 10:25:15.173543 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3688a39-d826-4d07-9d29-d75243003515" containerName="oc" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.173551 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3688a39-d826-4d07-9d29-d75243003515" containerName="oc" Feb 27 10:25:15 crc kubenswrapper[4998]: E0227 10:25:15.173561 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerName="extract-utilities" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.173569 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerName="extract-utilities" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.173688 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3688a39-d826-4d07-9d29-d75243003515" containerName="oc" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.173710 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d365be7-41cf-4570-a8fb-ef974affdb95" containerName="registry-server" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.174172 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.185010 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m8z6k"] Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.257410 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/12a25e87-a2f5-4d56-9029-3cbe6159258d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.257457 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/12a25e87-a2f5-4d56-9029-3cbe6159258d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.257504 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12a25e87-a2f5-4d56-9029-3cbe6159258d-trusted-ca\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.257539 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.257556 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/12a25e87-a2f5-4d56-9029-3cbe6159258d-registry-tls\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.257576 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzznp\" (UniqueName: \"kubernetes.io/projected/12a25e87-a2f5-4d56-9029-3cbe6159258d-kube-api-access-jzznp\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.257615 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12a25e87-a2f5-4d56-9029-3cbe6159258d-bound-sa-token\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.257644 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/12a25e87-a2f5-4d56-9029-3cbe6159258d-registry-certificates\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.276316 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.359041 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/12a25e87-a2f5-4d56-9029-3cbe6159258d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.359089 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/12a25e87-a2f5-4d56-9029-3cbe6159258d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.359113 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12a25e87-a2f5-4d56-9029-3cbe6159258d-trusted-ca\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.359143 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/12a25e87-a2f5-4d56-9029-3cbe6159258d-registry-tls\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.359159 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzznp\" (UniqueName: \"kubernetes.io/projected/12a25e87-a2f5-4d56-9029-3cbe6159258d-kube-api-access-jzznp\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.359190 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12a25e87-a2f5-4d56-9029-3cbe6159258d-bound-sa-token\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.359216 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/12a25e87-a2f5-4d56-9029-3cbe6159258d-registry-certificates\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.360041 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/12a25e87-a2f5-4d56-9029-3cbe6159258d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.361268 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/12a25e87-a2f5-4d56-9029-3cbe6159258d-registry-certificates\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.361296 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12a25e87-a2f5-4d56-9029-3cbe6159258d-trusted-ca\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.365293 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/12a25e87-a2f5-4d56-9029-3cbe6159258d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.365343 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/12a25e87-a2f5-4d56-9029-3cbe6159258d-registry-tls\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.375853 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzznp\" (UniqueName: \"kubernetes.io/projected/12a25e87-a2f5-4d56-9029-3cbe6159258d-kube-api-access-jzznp\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.376026 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12a25e87-a2f5-4d56-9029-3cbe6159258d-bound-sa-token\") pod \"image-registry-66df7c8f76-m8z6k\" (UID: \"12a25e87-a2f5-4d56-9029-3cbe6159258d\") " pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.493884 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.678380 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m8z6k"]
Feb 27 10:25:15 crc kubenswrapper[4998]: W0227 10:25:15.693357 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a25e87_a2f5_4d56_9029_3cbe6159258d.slice/crio-77c525c64d0b571c4a1405ef986ea5fe3f07332230e93e14fa6a8b604185f51b WatchSource:0}: Error finding container 77c525c64d0b571c4a1405ef986ea5fe3f07332230e93e14fa6a8b604185f51b: Status 404 returned error can't find the container with id 77c525c64d0b571c4a1405ef986ea5fe3f07332230e93e14fa6a8b604185f51b
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.856851 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" event={"ID":"12a25e87-a2f5-4d56-9029-3cbe6159258d","Type":"ContainerStarted","Data":"91cbeea1fb3114f465fed361f5dfafec103d0167d988775cad9cc27b9d902761"}
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.857208 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.857222 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" event={"ID":"12a25e87-a2f5-4d56-9029-3cbe6159258d","Type":"ContainerStarted","Data":"77c525c64d0b571c4a1405ef986ea5fe3f07332230e93e14fa6a8b604185f51b"}
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.866205 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.866494 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.867330 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.873044 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:25:15 crc kubenswrapper[4998]: I0227 10:25:15.885804 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" podStartSLOduration=0.885727256 podStartE2EDuration="885.727256ms" podCreationTimestamp="2026-02-27 10:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:25:15.879176652 +0000 UTC m=+467.877447700" watchObservedRunningTime="2026-02-27 10:25:15.885727256 +0000 UTC m=+467.883998274"
Feb 27 10:25:16 crc kubenswrapper[4998]: I0227 10:25:16.065765 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 10:25:16 crc kubenswrapper[4998]: W0227 10:25:16.274482 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-013454a8df79f772eb1f429ba26764f8b2b4a1aae0915d66dca46b8cc49c06d9 WatchSource:0}: Error finding container 013454a8df79f772eb1f429ba26764f8b2b4a1aae0915d66dca46b8cc49c06d9: Status 404 returned error can't find the container with id 013454a8df79f772eb1f429ba26764f8b2b4a1aae0915d66dca46b8cc49c06d9
Feb 27 10:25:16 crc kubenswrapper[4998]: I0227 10:25:16.863036 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"878cfd08c1f99f4731499fb6daaa8178c65998b60b63e161d1b9c18c034eb118"}
Feb 27 10:25:16 crc kubenswrapper[4998]: I0227 10:25:16.863716 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"013454a8df79f772eb1f429ba26764f8b2b4a1aae0915d66dca46b8cc49c06d9"}
Feb 27 10:25:16 crc kubenswrapper[4998]: I0227 10:25:16.885717 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:25:16 crc kubenswrapper[4998]: I0227 10:25:16.892876 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:25:16 crc kubenswrapper[4998]: I0227 10:25:16.987965 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:25:17 crc kubenswrapper[4998]: I0227 10:25:17.009702 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:25:17 crc kubenswrapper[4998]: I0227 10:25:17.066546 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 10:25:17 crc kubenswrapper[4998]: I0227 10:25:17.165736 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:25:17 crc kubenswrapper[4998]: W0227 10:25:17.392830 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-20c6e54ad1d86eebd15235dc1a0564583e6d6daa440207d6b5570519d8a8eee4 WatchSource:0}: Error finding container 20c6e54ad1d86eebd15235dc1a0564583e6d6daa440207d6b5570519d8a8eee4: Status 404 returned error can't find the container with id 20c6e54ad1d86eebd15235dc1a0564583e6d6daa440207d6b5570519d8a8eee4
Feb 27 10:25:17 crc kubenswrapper[4998]: I0227 10:25:17.870069 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"07fc3933bc243800385b317742218acb391cf5b291d8cb10d33d091ef44fe6a4"}
Feb 27 10:25:17 crc kubenswrapper[4998]: I0227 10:25:17.870125 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f9b458aac78f2fce53e86ba490bf6d32ad538171dfdbb85a05ffd5ead6e4b5b8"}
Feb 27 10:25:17 crc kubenswrapper[4998]: I0227 10:25:17.871785 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2747cc6015d9ec2813c392b60bcbfc90d96c1f8012b52a2993de2d12ccd6a81e"}
Feb 27 10:25:17 crc kubenswrapper[4998]: I0227 10:25:17.871823 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"20c6e54ad1d86eebd15235dc1a0564583e6d6daa440207d6b5570519d8a8eee4"}
Feb 27 10:25:17 crc kubenswrapper[4998]: I0227 10:25:17.872006 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.858746 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5v4md"]
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.859601 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5v4md" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerName="registry-server" containerID="cri-o://bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0" gracePeriod=30
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.878660 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxgdw"]
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.878968 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sxgdw" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerName="registry-server" containerID="cri-o://162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c" gracePeriod=30
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.900675 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2vxxl"]
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.900974 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" podUID="9881d4cb-217e-455b-b8f3-0ad24a1e51d7" containerName="marketplace-operator" containerID="cri-o://d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590" gracePeriod=30
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.907879 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ft6"]
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.908212 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n6ft6" podUID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerName="registry-server" containerID="cri-o://b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452" gracePeriod=30
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.930210 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lpczb"]
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.931291 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.934008 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lpczb"]
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.943945 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7v8l"]
Feb 27 10:25:20 crc kubenswrapper[4998]: I0227 10:25:20.944177 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7v8l" podUID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerName="registry-server" containerID="cri-o://75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342" gracePeriod=30
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.036269 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72f2d961-29af-48b5-b073-9c1de03ed288-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lpczb\" (UID: \"72f2d961-29af-48b5-b073-9c1de03ed288\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.036365 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dr5\" (UniqueName: \"kubernetes.io/projected/72f2d961-29af-48b5-b073-9c1de03ed288-kube-api-access-p2dr5\") pod \"marketplace-operator-79b997595-lpczb\" (UID: \"72f2d961-29af-48b5-b073-9c1de03ed288\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.036391 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72f2d961-29af-48b5-b073-9c1de03ed288-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lpczb\" (UID: \"72f2d961-29af-48b5-b073-9c1de03ed288\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.137835 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2dr5\" (UniqueName: \"kubernetes.io/projected/72f2d961-29af-48b5-b073-9c1de03ed288-kube-api-access-p2dr5\") pod \"marketplace-operator-79b997595-lpczb\" (UID: \"72f2d961-29af-48b5-b073-9c1de03ed288\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.137879 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72f2d961-29af-48b5-b073-9c1de03ed288-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lpczb\" (UID: \"72f2d961-29af-48b5-b073-9c1de03ed288\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.137940 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72f2d961-29af-48b5-b073-9c1de03ed288-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lpczb\" (UID: \"72f2d961-29af-48b5-b073-9c1de03ed288\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.139126 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72f2d961-29af-48b5-b073-9c1de03ed288-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lpczb\" (UID: \"72f2d961-29af-48b5-b073-9c1de03ed288\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.152462 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72f2d961-29af-48b5-b073-9c1de03ed288-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lpczb\" (UID: \"72f2d961-29af-48b5-b073-9c1de03ed288\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.155939 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2dr5\" (UniqueName: \"kubernetes.io/projected/72f2d961-29af-48b5-b073-9c1de03ed288-kube-api-access-p2dr5\") pod \"marketplace-operator-79b997595-lpczb\" (UID: \"72f2d961-29af-48b5-b073-9c1de03ed288\") " pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.299755 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lpczb"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.305294 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5v4md"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.309503 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.315667 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxgdw"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.347962 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ft6"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.357769 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7v8l"
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444492 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klsqf\" (UniqueName: \"kubernetes.io/projected/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-kube-api-access-klsqf\") pod \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444538 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-utilities\") pod \"c0b13491-88ff-401a-9df3-dc6c981fb11c\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444572 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-utilities\") pod \"de440cc8-1a01-4c10-83e6-027afdacde0c\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444608 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqx2d\" (UniqueName: \"kubernetes.io/projected/f5d59240-590d-47d4-95f7-de0c01a8d3e2-kube-api-access-fqx2d\") pod \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444630 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-utilities\") pod \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444662 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-catalog-content\") pod \"de440cc8-1a01-4c10-83e6-027afdacde0c\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444697 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fmpz\" (UniqueName: \"kubernetes.io/projected/1f770761-42e0-4e42-92c0-1e7fb8e45a49-kube-api-access-2fmpz\") pod \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444727 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-catalog-content\") pod \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\" (UID: \"1f770761-42e0-4e42-92c0-1e7fb8e45a49\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444777 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-utilities\") pod \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444792 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzpbc\" (UniqueName: \"kubernetes.io/projected/de440cc8-1a01-4c10-83e6-027afdacde0c-kube-api-access-jzpbc\") pod \"de440cc8-1a01-4c10-83e6-027afdacde0c\" (UID: \"de440cc8-1a01-4c10-83e6-027afdacde0c\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444820 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-catalog-content\") pod \"c0b13491-88ff-401a-9df3-dc6c981fb11c\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444856 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-trusted-ca\") pod \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444882 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-catalog-content\") pod \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\" (UID: \"f5d59240-590d-47d4-95f7-de0c01a8d3e2\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444901 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rtb9\" (UniqueName: \"kubernetes.io/projected/c0b13491-88ff-401a-9df3-dc6c981fb11c-kube-api-access-2rtb9\") pod \"c0b13491-88ff-401a-9df3-dc6c981fb11c\" (UID: \"c0b13491-88ff-401a-9df3-dc6c981fb11c\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.444945 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-operator-metrics\") pod \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\" (UID: \"9881d4cb-217e-455b-b8f3-0ad24a1e51d7\") "
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.445561 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-utilities" (OuterVolumeSpecName: "utilities") pod "de440cc8-1a01-4c10-83e6-027afdacde0c" (UID: "de440cc8-1a01-4c10-83e6-027afdacde0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.446262 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-utilities" (OuterVolumeSpecName: "utilities") pod "c0b13491-88ff-401a-9df3-dc6c981fb11c" (UID: "c0b13491-88ff-401a-9df3-dc6c981fb11c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.448169 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-utilities" (OuterVolumeSpecName: "utilities") pod "f5d59240-590d-47d4-95f7-de0c01a8d3e2" (UID: "f5d59240-590d-47d4-95f7-de0c01a8d3e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.448909 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de440cc8-1a01-4c10-83e6-027afdacde0c-kube-api-access-jzpbc" (OuterVolumeSpecName: "kube-api-access-jzpbc") pod "de440cc8-1a01-4c10-83e6-027afdacde0c" (UID: "de440cc8-1a01-4c10-83e6-027afdacde0c"). InnerVolumeSpecName "kube-api-access-jzpbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.449022 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-utilities" (OuterVolumeSpecName: "utilities") pod "1f770761-42e0-4e42-92c0-1e7fb8e45a49" (UID: "1f770761-42e0-4e42-92c0-1e7fb8e45a49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.449986 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-kube-api-access-klsqf" (OuterVolumeSpecName: "kube-api-access-klsqf") pod "9881d4cb-217e-455b-b8f3-0ad24a1e51d7" (UID: "9881d4cb-217e-455b-b8f3-0ad24a1e51d7"). InnerVolumeSpecName "kube-api-access-klsqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.450450 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9881d4cb-217e-455b-b8f3-0ad24a1e51d7" (UID: "9881d4cb-217e-455b-b8f3-0ad24a1e51d7"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.450827 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b13491-88ff-401a-9df3-dc6c981fb11c-kube-api-access-2rtb9" (OuterVolumeSpecName: "kube-api-access-2rtb9") pod "c0b13491-88ff-401a-9df3-dc6c981fb11c" (UID: "c0b13491-88ff-401a-9df3-dc6c981fb11c"). InnerVolumeSpecName "kube-api-access-2rtb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.453834 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d59240-590d-47d4-95f7-de0c01a8d3e2-kube-api-access-fqx2d" (OuterVolumeSpecName: "kube-api-access-fqx2d") pod "f5d59240-590d-47d4-95f7-de0c01a8d3e2" (UID: "f5d59240-590d-47d4-95f7-de0c01a8d3e2"). InnerVolumeSpecName "kube-api-access-fqx2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.454022 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9881d4cb-217e-455b-b8f3-0ad24a1e51d7" (UID: "9881d4cb-217e-455b-b8f3-0ad24a1e51d7"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.457961 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f770761-42e0-4e42-92c0-1e7fb8e45a49-kube-api-access-2fmpz" (OuterVolumeSpecName: "kube-api-access-2fmpz") pod "1f770761-42e0-4e42-92c0-1e7fb8e45a49" (UID: "1f770761-42e0-4e42-92c0-1e7fb8e45a49"). InnerVolumeSpecName "kube-api-access-2fmpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.495704 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f770761-42e0-4e42-92c0-1e7fb8e45a49" (UID: "1f770761-42e0-4e42-92c0-1e7fb8e45a49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.506544 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de440cc8-1a01-4c10-83e6-027afdacde0c" (UID: "de440cc8-1a01-4c10-83e6-027afdacde0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.532176 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5d59240-590d-47d4-95f7-de0c01a8d3e2" (UID: "f5d59240-590d-47d4-95f7-de0c01a8d3e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546181 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546218 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fmpz\" (UniqueName: \"kubernetes.io/projected/1f770761-42e0-4e42-92c0-1e7fb8e45a49-kube-api-access-2fmpz\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546267 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546277 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546286 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzpbc\" (UniqueName: \"kubernetes.io/projected/de440cc8-1a01-4c10-83e6-027afdacde0c-kube-api-access-jzpbc\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546295 4998 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546303 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d59240-590d-47d4-95f7-de0c01a8d3e2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546312 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rtb9\" (UniqueName: \"kubernetes.io/projected/c0b13491-88ff-401a-9df3-dc6c981fb11c-kube-api-access-2rtb9\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546320 4998 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546329 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klsqf\" (UniqueName: \"kubernetes.io/projected/9881d4cb-217e-455b-b8f3-0ad24a1e51d7-kube-api-access-klsqf\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546338 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:25:21
crc kubenswrapper[4998]: I0227 10:25:21.546345 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de440cc8-1a01-4c10-83e6-027afdacde0c-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546353 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqx2d\" (UniqueName: \"kubernetes.io/projected/f5d59240-590d-47d4-95f7-de0c01a8d3e2-kube-api-access-fqx2d\") on node \"crc\" DevicePath \"\"" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.546362 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f770761-42e0-4e42-92c0-1e7fb8e45a49-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.551547 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lpczb"] Feb 27 10:25:21 crc kubenswrapper[4998]: W0227 10:25:21.557338 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72f2d961_29af_48b5_b073_9c1de03ed288.slice/crio-b87feb182fe0139c46c68fbb0050b57761bc40b9d2cbc69d78c8e881d4dc9813 WatchSource:0}: Error finding container b87feb182fe0139c46c68fbb0050b57761bc40b9d2cbc69d78c8e881d4dc9813: Status 404 returned error can't find the container with id b87feb182fe0139c46c68fbb0050b57761bc40b9d2cbc69d78c8e881d4dc9813 Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.620698 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0b13491-88ff-401a-9df3-dc6c981fb11c" (UID: "c0b13491-88ff-401a-9df3-dc6c981fb11c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.647367 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b13491-88ff-401a-9df3-dc6c981fb11c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.903939 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lpczb" event={"ID":"72f2d961-29af-48b5-b073-9c1de03ed288","Type":"ContainerStarted","Data":"7f74e943d5fb24990ef7d616a127a20d5722572be6e8c48e6afec7f06b32bb3e"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.903982 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lpczb" event={"ID":"72f2d961-29af-48b5-b073-9c1de03ed288","Type":"ContainerStarted","Data":"b87feb182fe0139c46c68fbb0050b57761bc40b9d2cbc69d78c8e881d4dc9813"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.904602 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lpczb" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.905458 4998 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lpczb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.72:8080/healthz\": dial tcp 10.217.0.72:8080: connect: connection refused" start-of-body= Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.905558 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.905510 4998 generic.go:334] "Generic (PLEG): container finished" podID="9881d4cb-217e-455b-b8f3-0ad24a1e51d7" containerID="d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590" exitCode=0 Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.905658 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lpczb" podUID="72f2d961-29af-48b5-b073-9c1de03ed288" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.72:8080/healthz\": dial tcp 10.217.0.72:8080: connect: connection refused" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.905539 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" event={"ID":"9881d4cb-217e-455b-b8f3-0ad24a1e51d7","Type":"ContainerDied","Data":"d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.905781 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2vxxl" event={"ID":"9881d4cb-217e-455b-b8f3-0ad24a1e51d7","Type":"ContainerDied","Data":"a9d5b7dcd59174d82f9b5e579f11d6598d673771bfdf79ed28439493adb089a2"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.905844 4998 scope.go:117] "RemoveContainer" containerID="d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.910185 4998 generic.go:334] "Generic (PLEG): container finished" podID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerID="162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c" exitCode=0 Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.910283 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-sxgdw" event={"ID":"f5d59240-590d-47d4-95f7-de0c01a8d3e2","Type":"ContainerDied","Data":"162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.910317 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxgdw" event={"ID":"f5d59240-590d-47d4-95f7-de0c01a8d3e2","Type":"ContainerDied","Data":"e67a887271a70f53462b51c83dc92aafbdfa254451c1a2d290a67b7d220032a7"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.910398 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxgdw" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.916831 4998 generic.go:334] "Generic (PLEG): container finished" podID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerID="b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452" exitCode=0 Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.916926 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ft6" event={"ID":"1f770761-42e0-4e42-92c0-1e7fb8e45a49","Type":"ContainerDied","Data":"b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.916952 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ft6" event={"ID":"1f770761-42e0-4e42-92c0-1e7fb8e45a49","Type":"ContainerDied","Data":"ed9a49348248b9ce9780780623ceaaf70ce2e00deae3fa828662ca38fef25807"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.917010 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ft6" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.921135 4998 scope.go:117] "RemoveContainer" containerID="d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590" Feb 27 10:25:21 crc kubenswrapper[4998]: E0227 10:25:21.925917 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590\": container with ID starting with d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590 not found: ID does not exist" containerID="d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.925973 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590"} err="failed to get container status \"d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590\": rpc error: code = NotFound desc = could not find container \"d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590\": container with ID starting with d9af5ec79418bc2734d362b060b15b7fb5372988ac2361c5d7f37321ffe84590 not found: ID does not exist" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.926008 4998 scope.go:117] "RemoveContainer" containerID="162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.936090 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lpczb" podStartSLOduration=1.936070853 podStartE2EDuration="1.936070853s" podCreationTimestamp="2026-02-27 10:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:25:21.930181727 +0000 UTC m=+473.928452705" 
watchObservedRunningTime="2026-02-27 10:25:21.936070853 +0000 UTC m=+473.934341821" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.940994 4998 generic.go:334] "Generic (PLEG): container finished" podID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerID="75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342" exitCode=0 Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.942068 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7v8l" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.942080 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7v8l" event={"ID":"c0b13491-88ff-401a-9df3-dc6c981fb11c","Type":"ContainerDied","Data":"75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.943651 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7v8l" event={"ID":"c0b13491-88ff-401a-9df3-dc6c981fb11c","Type":"ContainerDied","Data":"ae13e69305cf46c16a9578363ec9a766ef0331332627e76e22f0e954e78b3755"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.959732 4998 scope.go:117] "RemoveContainer" containerID="cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.960901 4998 generic.go:334] "Generic (PLEG): container finished" podID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerID="bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0" exitCode=0 Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.960946 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v4md" event={"ID":"de440cc8-1a01-4c10-83e6-027afdacde0c","Type":"ContainerDied","Data":"bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.960980 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5v4md" event={"ID":"de440cc8-1a01-4c10-83e6-027afdacde0c","Type":"ContainerDied","Data":"0c47855489fc5cb1a38fb1f8c84bc7fa689076edd84807ec906a47d0a3c234e4"} Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.961066 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5v4md" Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.979599 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2vxxl"] Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.986759 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2vxxl"] Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.990828 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxgdw"] Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.994139 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sxgdw"] Feb 27 10:25:21 crc kubenswrapper[4998]: I0227 10:25:21.996068 4998 scope.go:117] "RemoveContainer" containerID="b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.004769 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ft6"] Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.013840 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ft6"] Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.027677 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7v8l"] Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.031045 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-s7v8l"] Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.044894 4998 scope.go:117] "RemoveContainer" containerID="162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.045703 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c\": container with ID starting with 162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c not found: ID does not exist" containerID="162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.045745 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c"} err="failed to get container status \"162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c\": rpc error: code = NotFound desc = could not find container \"162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c\": container with ID starting with 162043baa4dadda544f11bf778dc6cadbc2ce1b777335ebb490b67900277db8c not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.045779 4998 scope.go:117] "RemoveContainer" containerID="cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.046154 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4\": container with ID starting with cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4 not found: ID does not exist" containerID="cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.046182 
4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4"} err="failed to get container status \"cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4\": rpc error: code = NotFound desc = could not find container \"cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4\": container with ID starting with cfca23ff4ad102d37f35e093e0dc7376fe336a4d305c764c440873cb44f628b4 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.046202 4998 scope.go:117] "RemoveContainer" containerID="b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.047815 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323\": container with ID starting with b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323 not found: ID does not exist" containerID="b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.047851 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323"} err="failed to get container status \"b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323\": rpc error: code = NotFound desc = could not find container \"b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323\": container with ID starting with b85f16b4e9246ce5e828de8ed35dfe535832417f4c69bcca3307435ee6b6f323 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.047874 4998 scope.go:117] "RemoveContainer" containerID="b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 
10:25:22.069177 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5v4md"] Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.073048 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5v4md"] Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.075499 4998 scope.go:117] "RemoveContainer" containerID="c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.095089 4998 scope.go:117] "RemoveContainer" containerID="18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.114615 4998 scope.go:117] "RemoveContainer" containerID="b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.115147 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452\": container with ID starting with b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452 not found: ID does not exist" containerID="b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.115198 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452"} err="failed to get container status \"b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452\": rpc error: code = NotFound desc = could not find container \"b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452\": container with ID starting with b505ce2826ecb1df3ed2898d6f985851f5c28c9be530e002bb0f8177a161e452 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.115241 4998 scope.go:117] "RemoveContainer" 
containerID="c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.115643 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8\": container with ID starting with c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8 not found: ID does not exist" containerID="c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.115675 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8"} err="failed to get container status \"c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8\": rpc error: code = NotFound desc = could not find container \"c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8\": container with ID starting with c9225ffd134f1cd0c59eb285e23bce72e3c8b7e871c130eedce090345a2446d8 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.115696 4998 scope.go:117] "RemoveContainer" containerID="18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.116036 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e\": container with ID starting with 18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e not found: ID does not exist" containerID="18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.116058 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e"} err="failed to get container status \"18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e\": rpc error: code = NotFound desc = could not find container \"18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e\": container with ID starting with 18033807b33477bfde4aed336ca5a92a737869be7b61d1fdff1fb250556a860e not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.116070 4998 scope.go:117] "RemoveContainer" containerID="75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.129567 4998 scope.go:117] "RemoveContainer" containerID="dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.143704 4998 scope.go:117] "RemoveContainer" containerID="7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.162172 4998 scope.go:117] "RemoveContainer" containerID="75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.162766 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342\": container with ID starting with 75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342 not found: ID does not exist" containerID="75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.162800 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342"} err="failed to get container status \"75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342\": rpc error: code = 
NotFound desc = could not find container \"75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342\": container with ID starting with 75406e77eeb9350094d2cb1a187e294d6f5122612bd7576fad75f37d95c44342 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.162831 4998 scope.go:117] "RemoveContainer" containerID="dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.163244 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0\": container with ID starting with dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0 not found: ID does not exist" containerID="dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.163279 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0"} err="failed to get container status \"dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0\": rpc error: code = NotFound desc = could not find container \"dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0\": container with ID starting with dc81ad87ba311e77d3cf11251fcc5005494c45fe93b4d15b78555a7ec6ff4cc0 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.163297 4998 scope.go:117] "RemoveContainer" containerID="7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.163901 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18\": container with ID starting with 
7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18 not found: ID does not exist" containerID="7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.163922 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18"} err="failed to get container status \"7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18\": rpc error: code = NotFound desc = could not find container \"7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18\": container with ID starting with 7a4197c8878bf6156d50484ccfd6bd94a2601a22b6e45f366c6d1f280ff9cc18 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.163939 4998 scope.go:117] "RemoveContainer" containerID="bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.182606 4998 scope.go:117] "RemoveContainer" containerID="1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.199654 4998 scope.go:117] "RemoveContainer" containerID="98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.212098 4998 scope.go:117] "RemoveContainer" containerID="bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.213113 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0\": container with ID starting with bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0 not found: ID does not exist" containerID="bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 
10:25:22.213204 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0"} err="failed to get container status \"bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0\": rpc error: code = NotFound desc = could not find container \"bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0\": container with ID starting with bafecbdde451921e88f0f4ea4463de30d677ec035432447fbc4985cb14347eb0 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.213307 4998 scope.go:117] "RemoveContainer" containerID="1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55" Feb 27 10:25:22 crc kubenswrapper[4998]: E0227 10:25:22.213721 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55\": container with ID starting with 1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55 not found: ID does not exist" containerID="1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.213767 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55"} err="failed to get container status \"1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55\": rpc error: code = NotFound desc = could not find container \"1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55\": container with ID starting with 1d7d92b9acca418a0294c6d03171daa3189a4c186061b716dad4aecd11818d55 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.213799 4998 scope.go:117] "RemoveContainer" containerID="98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5" Feb 27 10:25:22 crc 
kubenswrapper[4998]: E0227 10:25:22.214094 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5\": container with ID starting with 98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5 not found: ID does not exist" containerID="98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.214124 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5"} err="failed to get container status \"98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5\": rpc error: code = NotFound desc = could not find container \"98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5\": container with ID starting with 98a93b2d84e6569118234d8426502ed7a8181ac62ef134a51d3d710c993a96c5 not found: ID does not exist" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.773573 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" path="/var/lib/kubelet/pods/1f770761-42e0-4e42-92c0-1e7fb8e45a49/volumes" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.775036 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9881d4cb-217e-455b-b8f3-0ad24a1e51d7" path="/var/lib/kubelet/pods/9881d4cb-217e-455b-b8f3-0ad24a1e51d7/volumes" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.775707 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0b13491-88ff-401a-9df3-dc6c981fb11c" path="/var/lib/kubelet/pods/c0b13491-88ff-401a-9df3-dc6c981fb11c/volumes" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.777516 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c" 
path="/var/lib/kubelet/pods/de440cc8-1a01-4c10-83e6-027afdacde0c/volumes" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.778749 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" path="/var/lib/kubelet/pods/f5d59240-590d-47d4-95f7-de0c01a8d3e2/volumes" Feb 27 10:25:22 crc kubenswrapper[4998]: I0227 10:25:22.986548 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lpczb" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.901772 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2rbwc"] Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902033 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902049 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902066 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerName="extract-utilities" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902075 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerName="extract-utilities" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902091 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerName="extract-utilities" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902099 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerName="extract-utilities" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902109 4998 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerName="extract-content" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902114 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerName="extract-content" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902122 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9881d4cb-217e-455b-b8f3-0ad24a1e51d7" containerName="marketplace-operator" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902128 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="9881d4cb-217e-455b-b8f3-0ad24a1e51d7" containerName="marketplace-operator" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902135 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerName="extract-utilities" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902141 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerName="extract-utilities" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902173 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902179 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902187 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902193 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902202 4998 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerName="extract-content" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902208 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerName="extract-content" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902219 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerName="extract-utilities" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902238 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerName="extract-utilities" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902248 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902254 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902263 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerName="extract-content" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902270 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerName="extract-content" Feb 27 10:25:23 crc kubenswrapper[4998]: E0227 10:25:23.902278 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerName="extract-content" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902284 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerName="extract-content" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902417 4998 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c0b13491-88ff-401a-9df3-dc6c981fb11c" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902429 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="9881d4cb-217e-455b-b8f3-0ad24a1e51d7" containerName="marketplace-operator" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902438 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="de440cc8-1a01-4c10-83e6-027afdacde0c" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902448 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d59240-590d-47d4-95f7-de0c01a8d3e2" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.902457 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f770761-42e0-4e42-92c0-1e7fb8e45a49" containerName="registry-server" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.903314 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.906377 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.908350 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rbwc"] Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.992650 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd74380-0bb3-4d33-a593-35a6f3388e3d-utilities\") pod \"redhat-marketplace-2rbwc\" (UID: \"efd74380-0bb3-4d33-a593-35a6f3388e3d\") " pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.992722 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd74380-0bb3-4d33-a593-35a6f3388e3d-catalog-content\") pod \"redhat-marketplace-2rbwc\" (UID: \"efd74380-0bb3-4d33-a593-35a6f3388e3d\") " pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:23 crc kubenswrapper[4998]: I0227 10:25:23.992752 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gbfr\" (UniqueName: \"kubernetes.io/projected/efd74380-0bb3-4d33-a593-35a6f3388e3d-kube-api-access-7gbfr\") pod \"redhat-marketplace-2rbwc\" (UID: \"efd74380-0bb3-4d33-a593-35a6f3388e3d\") " pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.093779 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd74380-0bb3-4d33-a593-35a6f3388e3d-utilities\") pod \"redhat-marketplace-2rbwc\" (UID: 
\"efd74380-0bb3-4d33-a593-35a6f3388e3d\") " pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.093937 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd74380-0bb3-4d33-a593-35a6f3388e3d-catalog-content\") pod \"redhat-marketplace-2rbwc\" (UID: \"efd74380-0bb3-4d33-a593-35a6f3388e3d\") " pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.094056 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gbfr\" (UniqueName: \"kubernetes.io/projected/efd74380-0bb3-4d33-a593-35a6f3388e3d-kube-api-access-7gbfr\") pod \"redhat-marketplace-2rbwc\" (UID: \"efd74380-0bb3-4d33-a593-35a6f3388e3d\") " pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.094831 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd74380-0bb3-4d33-a593-35a6f3388e3d-catalog-content\") pod \"redhat-marketplace-2rbwc\" (UID: \"efd74380-0bb3-4d33-a593-35a6f3388e3d\") " pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.094856 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd74380-0bb3-4d33-a593-35a6f3388e3d-utilities\") pod \"redhat-marketplace-2rbwc\" (UID: \"efd74380-0bb3-4d33-a593-35a6f3388e3d\") " pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.117270 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gbfr\" (UniqueName: \"kubernetes.io/projected/efd74380-0bb3-4d33-a593-35a6f3388e3d-kube-api-access-7gbfr\") pod \"redhat-marketplace-2rbwc\" (UID: 
\"efd74380-0bb3-4d33-a593-35a6f3388e3d\") " pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.220432 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.386884 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rbwc"] Feb 27 10:25:24 crc kubenswrapper[4998]: W0227 10:25:24.395572 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefd74380_0bb3_4d33_a593_35a6f3388e3d.slice/crio-a50fd3c2556af449a5fa1d353c8aaab3f092b68a0594eb4a3abc1ad7594be60e WatchSource:0}: Error finding container a50fd3c2556af449a5fa1d353c8aaab3f092b68a0594eb4a3abc1ad7594be60e: Status 404 returned error can't find the container with id a50fd3c2556af449a5fa1d353c8aaab3f092b68a0594eb4a3abc1ad7594be60e Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.680833 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vkrls"] Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.682969 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.685521 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.687731 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkrls"] Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.802942 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380-utilities\") pod \"redhat-operators-vkrls\" (UID: \"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380\") " pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.803000 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlcf7\" (UniqueName: \"kubernetes.io/projected/fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380-kube-api-access-zlcf7\") pod \"redhat-operators-vkrls\" (UID: \"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380\") " pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.803023 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380-catalog-content\") pod \"redhat-operators-vkrls\" (UID: \"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380\") " pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.904763 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlcf7\" (UniqueName: \"kubernetes.io/projected/fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380-kube-api-access-zlcf7\") pod \"redhat-operators-vkrls\" (UID: 
\"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380\") " pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.904813 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380-catalog-content\") pod \"redhat-operators-vkrls\" (UID: \"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380\") " pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.904912 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380-utilities\") pod \"redhat-operators-vkrls\" (UID: \"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380\") " pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.905241 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380-catalog-content\") pod \"redhat-operators-vkrls\" (UID: \"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380\") " pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.905310 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380-utilities\") pod \"redhat-operators-vkrls\" (UID: \"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380\") " pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.923392 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlcf7\" (UniqueName: \"kubernetes.io/projected/fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380-kube-api-access-zlcf7\") pod \"redhat-operators-vkrls\" (UID: \"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380\") " 
pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.994993 4998 generic.go:334] "Generic (PLEG): container finished" podID="efd74380-0bb3-4d33-a593-35a6f3388e3d" containerID="ba94767d6384223d7513b2a5ef2e3de952fae6c89336dc3ef4dc73a705bf0fdd" exitCode=0 Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.995036 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rbwc" event={"ID":"efd74380-0bb3-4d33-a593-35a6f3388e3d","Type":"ContainerDied","Data":"ba94767d6384223d7513b2a5ef2e3de952fae6c89336dc3ef4dc73a705bf0fdd"} Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.995069 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rbwc" event={"ID":"efd74380-0bb3-4d33-a593-35a6f3388e3d","Type":"ContainerStarted","Data":"a50fd3c2556af449a5fa1d353c8aaab3f092b68a0594eb4a3abc1ad7594be60e"} Feb 27 10:25:24 crc kubenswrapper[4998]: I0227 10:25:24.997318 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:25 crc kubenswrapper[4998]: I0227 10:25:25.387748 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkrls"] Feb 27 10:25:25 crc kubenswrapper[4998]: W0227 10:25:25.392378 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbd2e6ba_efd1_454e_81b3_4f1e1d5e6380.slice/crio-12340943b08b78a3efd21fdb69c56293afed44933ef9042b9d968022176e1b33 WatchSource:0}: Error finding container 12340943b08b78a3efd21fdb69c56293afed44933ef9042b9d968022176e1b33: Status 404 returned error can't find the container with id 12340943b08b78a3efd21fdb69c56293afed44933ef9042b9d968022176e1b33 Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.002968 4998 generic.go:334] "Generic (PLEG): container finished" podID="fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380" containerID="c20d161c4899acccda85dd152a30f2e8d0e0100e96a724c37704989b17013a14" exitCode=0 Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.003033 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkrls" event={"ID":"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380","Type":"ContainerDied","Data":"c20d161c4899acccda85dd152a30f2e8d0e0100e96a724c37704989b17013a14"} Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.003103 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkrls" event={"ID":"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380","Type":"ContainerStarted","Data":"12340943b08b78a3efd21fdb69c56293afed44933ef9042b9d968022176e1b33"} Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.006177 4998 generic.go:334] "Generic (PLEG): container finished" podID="efd74380-0bb3-4d33-a593-35a6f3388e3d" containerID="1d6c848a309278410eeaf0e2e6fc31e0e2df78e282cc2f2811f8f2c0e1f5da1e" exitCode=0 Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.006248 
4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rbwc" event={"ID":"efd74380-0bb3-4d33-a593-35a6f3388e3d","Type":"ContainerDied","Data":"1d6c848a309278410eeaf0e2e6fc31e0e2df78e282cc2f2811f8f2c0e1f5da1e"} Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.483650 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sf2xm"] Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.485547 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.489893 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.493597 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sf2xm"] Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.628995 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9rqj\" (UniqueName: \"kubernetes.io/projected/f9f52852-ea09-4a76-a196-c48346479c71-kube-api-access-c9rqj\") pod \"community-operators-sf2xm\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") " pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.629061 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-utilities\") pod \"community-operators-sf2xm\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") " pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.629091 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-catalog-content\") pod \"community-operators-sf2xm\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") " pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.730324 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9rqj\" (UniqueName: \"kubernetes.io/projected/f9f52852-ea09-4a76-a196-c48346479c71-kube-api-access-c9rqj\") pod \"community-operators-sf2xm\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") " pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.730405 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-utilities\") pod \"community-operators-sf2xm\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") " pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.730444 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-catalog-content\") pod \"community-operators-sf2xm\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") " pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.730924 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-catalog-content\") pod \"community-operators-sf2xm\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") " pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.731131 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-utilities\") pod \"community-operators-sf2xm\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") " pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.752204 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9rqj\" (UniqueName: \"kubernetes.io/projected/f9f52852-ea09-4a76-a196-c48346479c71-kube-api-access-c9rqj\") pod \"community-operators-sf2xm\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") " pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:26 crc kubenswrapper[4998]: I0227 10:25:26.809192 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:26.995864 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sf2xm"] Feb 27 10:25:28 crc kubenswrapper[4998]: W0227 10:25:27.000779 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f52852_ea09_4a76_a196_c48346479c71.slice/crio-fb549892fdf168464c6dc4ea7f98c86421b3fb1ef9d3a988c0ef2ef87fef8e9b WatchSource:0}: Error finding container fb549892fdf168464c6dc4ea7f98c86421b3fb1ef9d3a988c0ef2ef87fef8e9b: Status 404 returned error can't find the container with id fb549892fdf168464c6dc4ea7f98c86421b3fb1ef9d3a988c0ef2ef87fef8e9b Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.016802 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rbwc" event={"ID":"efd74380-0bb3-4d33-a593-35a6f3388e3d","Type":"ContainerStarted","Data":"0fab5bce3a96f92670c1791579a820d3268cc62a5a2b0f4d3c1c9684699e134b"} Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.024798 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vkrls" event={"ID":"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380","Type":"ContainerStarted","Data":"51dec73cebc7916d9ac168d7d3047d62e39e26e5ca3826c97346506e84701cb4"} Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.035483 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sf2xm" event={"ID":"f9f52852-ea09-4a76-a196-c48346479c71","Type":"ContainerStarted","Data":"fb549892fdf168464c6dc4ea7f98c86421b3fb1ef9d3a988c0ef2ef87fef8e9b"} Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.052061 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2rbwc" podStartSLOduration=2.681605393 podStartE2EDuration="4.052040242s" podCreationTimestamp="2026-02-27 10:25:23 +0000 UTC" firstStartedPulling="2026-02-27 10:25:24.997103477 +0000 UTC m=+476.995374445" lastFinishedPulling="2026-02-27 10:25:26.367538316 +0000 UTC m=+478.365809294" observedRunningTime="2026-02-27 10:25:27.048691384 +0000 UTC m=+479.046962352" watchObservedRunningTime="2026-02-27 10:25:27.052040242 +0000 UTC m=+479.050311210" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.080057 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5hqqj"] Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.087650 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.090393 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hqqj"] Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.094356 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.134764 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fqlc\" (UniqueName: \"kubernetes.io/projected/dc780063-d484-4f3b-9baf-c1071d1a5b23-kube-api-access-6fqlc\") pod \"certified-operators-5hqqj\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.134800 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-catalog-content\") pod \"certified-operators-5hqqj\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.134849 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-utilities\") pod \"certified-operators-5hqqj\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.237359 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-utilities\") pod \"certified-operators-5hqqj\" (UID: 
\"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.237454 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fqlc\" (UniqueName: \"kubernetes.io/projected/dc780063-d484-4f3b-9baf-c1071d1a5b23-kube-api-access-6fqlc\") pod \"certified-operators-5hqqj\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.237479 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-catalog-content\") pod \"certified-operators-5hqqj\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.237903 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-catalog-content\") pod \"certified-operators-5hqqj\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.238104 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-utilities\") pod \"certified-operators-5hqqj\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.257189 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fqlc\" (UniqueName: \"kubernetes.io/projected/dc780063-d484-4f3b-9baf-c1071d1a5b23-kube-api-access-6fqlc\") pod \"certified-operators-5hqqj\" (UID: 
\"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:27.431840 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:28.046269 4998 generic.go:334] "Generic (PLEG): container finished" podID="f9f52852-ea09-4a76-a196-c48346479c71" containerID="ace4f67f68528e2c8b3992d7d4d254e3cbc4ae4643b850755d75b2d21e2c119e" exitCode=0 Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:28.046512 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sf2xm" event={"ID":"f9f52852-ea09-4a76-a196-c48346479c71","Type":"ContainerDied","Data":"ace4f67f68528e2c8b3992d7d4d254e3cbc4ae4643b850755d75b2d21e2c119e"} Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:28.049147 4998 generic.go:334] "Generic (PLEG): container finished" podID="fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380" containerID="51dec73cebc7916d9ac168d7d3047d62e39e26e5ca3826c97346506e84701cb4" exitCode=0 Feb 27 10:25:28 crc kubenswrapper[4998]: I0227 10:25:28.049205 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkrls" event={"ID":"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380","Type":"ContainerDied","Data":"51dec73cebc7916d9ac168d7d3047d62e39e26e5ca3826c97346506e84701cb4"} Feb 27 10:25:28 crc kubenswrapper[4998]: W0227 10:25:28.260194 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc780063_d484_4f3b_9baf_c1071d1a5b23.slice/crio-ec96ac6c8ba4662d4b5bb139b7311d19253e4ffb1716f4ad54262e65ebb544c2 WatchSource:0}: Error finding container ec96ac6c8ba4662d4b5bb139b7311d19253e4ffb1716f4ad54262e65ebb544c2: Status 404 returned error can't find the container with id ec96ac6c8ba4662d4b5bb139b7311d19253e4ffb1716f4ad54262e65ebb544c2 Feb 27 10:25:28 
crc kubenswrapper[4998]: I0227 10:25:28.273263 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hqqj"] Feb 27 10:25:29 crc kubenswrapper[4998]: I0227 10:25:29.054873 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sf2xm" event={"ID":"f9f52852-ea09-4a76-a196-c48346479c71","Type":"ContainerStarted","Data":"a7a5487650ef31d0878ffa7c8dd1a925cc18a678b61369e150c13c288829e3aa"} Feb 27 10:25:29 crc kubenswrapper[4998]: I0227 10:25:29.056991 4998 generic.go:334] "Generic (PLEG): container finished" podID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerID="800144401f9cdcb7a74ce983979fee962f3fd6fc568ed3af42e5e21886b514e2" exitCode=0 Feb 27 10:25:29 crc kubenswrapper[4998]: I0227 10:25:29.057099 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hqqj" event={"ID":"dc780063-d484-4f3b-9baf-c1071d1a5b23","Type":"ContainerDied","Data":"800144401f9cdcb7a74ce983979fee962f3fd6fc568ed3af42e5e21886b514e2"} Feb 27 10:25:29 crc kubenswrapper[4998]: I0227 10:25:29.057118 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hqqj" event={"ID":"dc780063-d484-4f3b-9baf-c1071d1a5b23","Type":"ContainerStarted","Data":"ec96ac6c8ba4662d4b5bb139b7311d19253e4ffb1716f4ad54262e65ebb544c2"} Feb 27 10:25:29 crc kubenswrapper[4998]: I0227 10:25:29.061287 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkrls" event={"ID":"fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380","Type":"ContainerStarted","Data":"8e26baabde1a3809ac6fa3cf5a0525f0bd199dd0bed56d9588efd9db255c17d9"} Feb 27 10:25:29 crc kubenswrapper[4998]: I0227 10:25:29.117986 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vkrls" podStartSLOduration=2.663716993 podStartE2EDuration="5.117930928s" podCreationTimestamp="2026-02-27 10:25:24 
+0000 UTC" firstStartedPulling="2026-02-27 10:25:26.004434215 +0000 UTC m=+478.002705183" lastFinishedPulling="2026-02-27 10:25:28.45864815 +0000 UTC m=+480.456919118" observedRunningTime="2026-02-27 10:25:29.096019368 +0000 UTC m=+481.094290346" watchObservedRunningTime="2026-02-27 10:25:29.117930928 +0000 UTC m=+481.116201896" Feb 27 10:25:30 crc kubenswrapper[4998]: I0227 10:25:30.072274 4998 generic.go:334] "Generic (PLEG): container finished" podID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerID="23d268034254ceae99ffe8551dc8b36388bad45c9cf33e279d19d6978a3689cc" exitCode=0 Feb 27 10:25:30 crc kubenswrapper[4998]: I0227 10:25:30.072473 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hqqj" event={"ID":"dc780063-d484-4f3b-9baf-c1071d1a5b23","Type":"ContainerDied","Data":"23d268034254ceae99ffe8551dc8b36388bad45c9cf33e279d19d6978a3689cc"} Feb 27 10:25:30 crc kubenswrapper[4998]: I0227 10:25:30.075705 4998 generic.go:334] "Generic (PLEG): container finished" podID="f9f52852-ea09-4a76-a196-c48346479c71" containerID="a7a5487650ef31d0878ffa7c8dd1a925cc18a678b61369e150c13c288829e3aa" exitCode=0 Feb 27 10:25:30 crc kubenswrapper[4998]: I0227 10:25:30.076007 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sf2xm" event={"ID":"f9f52852-ea09-4a76-a196-c48346479c71","Type":"ContainerDied","Data":"a7a5487650ef31d0878ffa7c8dd1a925cc18a678b61369e150c13c288829e3aa"} Feb 27 10:25:31 crc kubenswrapper[4998]: I0227 10:25:31.086534 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hqqj" event={"ID":"dc780063-d484-4f3b-9baf-c1071d1a5b23","Type":"ContainerStarted","Data":"ad286d860afaa73859e36c2e5a072cd90a581e79835a274f8cc79626572bfd12"} Feb 27 10:25:31 crc kubenswrapper[4998]: I0227 10:25:31.089972 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sf2xm" 
event={"ID":"f9f52852-ea09-4a76-a196-c48346479c71","Type":"ContainerStarted","Data":"deaeb8376d7e055b04c70dc67e63719fe401a59b80e0a32f2d4c9fae5d0ee876"} Feb 27 10:25:31 crc kubenswrapper[4998]: I0227 10:25:31.105217 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5hqqj" podStartSLOduration=2.669946474 podStartE2EDuration="4.105197912s" podCreationTimestamp="2026-02-27 10:25:27 +0000 UTC" firstStartedPulling="2026-02-27 10:25:29.058406761 +0000 UTC m=+481.056677729" lastFinishedPulling="2026-02-27 10:25:30.493658169 +0000 UTC m=+482.491929167" observedRunningTime="2026-02-27 10:25:31.10170855 +0000 UTC m=+483.099979538" watchObservedRunningTime="2026-02-27 10:25:31.105197912 +0000 UTC m=+483.103468890" Feb 27 10:25:31 crc kubenswrapper[4998]: I0227 10:25:31.120659 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sf2xm" podStartSLOduration=2.616660147 podStartE2EDuration="5.1206413s" podCreationTimestamp="2026-02-27 10:25:26 +0000 UTC" firstStartedPulling="2026-02-27 10:25:28.049062419 +0000 UTC m=+480.047333397" lastFinishedPulling="2026-02-27 10:25:30.553043582 +0000 UTC m=+482.551314550" observedRunningTime="2026-02-27 10:25:31.11985027 +0000 UTC m=+483.118121238" watchObservedRunningTime="2026-02-27 10:25:31.1206413 +0000 UTC m=+483.118912268" Feb 27 10:25:33 crc kubenswrapper[4998]: I0227 10:25:33.086105 4998 scope.go:117] "RemoveContainer" containerID="22e749ae59c7bd8ab4f2d458cd33ccbae459eb43375c8abcdd275ce7f3978d5b" Feb 27 10:25:34 crc kubenswrapper[4998]: I0227 10:25:34.221418 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:34 crc kubenswrapper[4998]: I0227 10:25:34.221470 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:34 crc 
kubenswrapper[4998]: I0227 10:25:34.264792 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:34 crc kubenswrapper[4998]: I0227 10:25:34.998261 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:34 crc kubenswrapper[4998]: I0227 10:25:34.998504 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:35 crc kubenswrapper[4998]: I0227 10:25:35.042596 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:35 crc kubenswrapper[4998]: I0227 10:25:35.145072 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2rbwc" Feb 27 10:25:35 crc kubenswrapper[4998]: I0227 10:25:35.167460 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vkrls" Feb 27 10:25:35 crc kubenswrapper[4998]: I0227 10:25:35.498943 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-m8z6k" Feb 27 10:25:35 crc kubenswrapper[4998]: I0227 10:25:35.580807 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2m9rf"] Feb 27 10:25:36 crc kubenswrapper[4998]: I0227 10:25:36.810054 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:36 crc kubenswrapper[4998]: I0227 10:25:36.810199 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:36 crc kubenswrapper[4998]: I0227 10:25:36.862793 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:37 crc kubenswrapper[4998]: I0227 10:25:37.157782 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sf2xm" Feb 27 10:25:37 crc kubenswrapper[4998]: I0227 10:25:37.432832 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:37 crc kubenswrapper[4998]: I0227 10:25:37.432914 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:37 crc kubenswrapper[4998]: I0227 10:25:37.476668 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:38 crc kubenswrapper[4998]: I0227 10:25:38.163065 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:25:40 crc kubenswrapper[4998]: I0227 10:25:40.505399 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:25:40 crc kubenswrapper[4998]: I0227 10:25:40.505830 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:25:40 crc kubenswrapper[4998]: I0227 10:25:40.505892 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:25:40 crc 
kubenswrapper[4998]: I0227 10:25:40.506911 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef0e9e290020f4de6e2dbb18ef565f138df88eee5d534e2df45cae2f81d96bd3"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:25:40 crc kubenswrapper[4998]: I0227 10:25:40.507075 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://ef0e9e290020f4de6e2dbb18ef565f138df88eee5d534e2df45cae2f81d96bd3" gracePeriod=600 Feb 27 10:25:41 crc kubenswrapper[4998]: I0227 10:25:41.141462 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="ef0e9e290020f4de6e2dbb18ef565f138df88eee5d534e2df45cae2f81d96bd3" exitCode=0 Feb 27 10:25:41 crc kubenswrapper[4998]: I0227 10:25:41.141501 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"ef0e9e290020f4de6e2dbb18ef565f138df88eee5d534e2df45cae2f81d96bd3"} Feb 27 10:25:41 crc kubenswrapper[4998]: I0227 10:25:41.141527 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"234da51b68ba7f355a9213d1b205beeeaf0cebf43b06c886158db76841ca5c10"} Feb 27 10:25:41 crc kubenswrapper[4998]: I0227 10:25:41.141544 4998 scope.go:117] "RemoveContainer" containerID="94514be337c3278cc1dfc8b2f0c50050f03294a2cc4ac6a72c62695d2fe4152a" Feb 27 10:25:47 crc kubenswrapper[4998]: I0227 10:25:47.173626 4998 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.139218 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536466-rm8rs"] Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.140415 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536466-rm8rs" Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.142761 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.142924 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.146434 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.149468 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536466-rm8rs"] Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.187334 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2fb7\" (UniqueName: \"kubernetes.io/projected/2c2a58bd-244a-4888-943f-2a222e58689b-kube-api-access-f2fb7\") pod \"auto-csr-approver-29536466-rm8rs\" (UID: \"2c2a58bd-244a-4888-943f-2a222e58689b\") " pod="openshift-infra/auto-csr-approver-29536466-rm8rs" Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.288944 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2fb7\" (UniqueName: \"kubernetes.io/projected/2c2a58bd-244a-4888-943f-2a222e58689b-kube-api-access-f2fb7\") pod \"auto-csr-approver-29536466-rm8rs\" (UID: \"2c2a58bd-244a-4888-943f-2a222e58689b\") " 
pod="openshift-infra/auto-csr-approver-29536466-rm8rs" Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.308465 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2fb7\" (UniqueName: \"kubernetes.io/projected/2c2a58bd-244a-4888-943f-2a222e58689b-kube-api-access-f2fb7\") pod \"auto-csr-approver-29536466-rm8rs\" (UID: \"2c2a58bd-244a-4888-943f-2a222e58689b\") " pod="openshift-infra/auto-csr-approver-29536466-rm8rs" Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.466025 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536466-rm8rs" Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.631701 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" podUID="4055490d-1d4a-4b0b-bf94-e2eaa714bc49" containerName="registry" containerID="cri-o://db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f" gracePeriod=30 Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.641576 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536466-rm8rs"] Feb 27 10:26:00 crc kubenswrapper[4998]: I0227 10:26:00.954458 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.000473 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-certificates\") pod \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.000578 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbgr4\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-kube-api-access-pbgr4\") pod \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.000756 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.000801 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-installation-pull-secrets\") pod \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.000841 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-trusted-ca\") pod \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.000889 4998 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-tls\") pod \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.000925 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-ca-trust-extracted\") pod \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.000952 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-bound-sa-token\") pod \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\" (UID: \"4055490d-1d4a-4b0b-bf94-e2eaa714bc49\") " Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.001354 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4055490d-1d4a-4b0b-bf94-e2eaa714bc49" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.001782 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4055490d-1d4a-4b0b-bf94-e2eaa714bc49" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.006032 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4055490d-1d4a-4b0b-bf94-e2eaa714bc49" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.006153 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4055490d-1d4a-4b0b-bf94-e2eaa714bc49" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.006342 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-kube-api-access-pbgr4" (OuterVolumeSpecName: "kube-api-access-pbgr4") pod "4055490d-1d4a-4b0b-bf94-e2eaa714bc49" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49"). InnerVolumeSpecName "kube-api-access-pbgr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.008733 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4055490d-1d4a-4b0b-bf94-e2eaa714bc49" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.009187 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4055490d-1d4a-4b0b-bf94-e2eaa714bc49" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.020465 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4055490d-1d4a-4b0b-bf94-e2eaa714bc49" (UID: "4055490d-1d4a-4b0b-bf94-e2eaa714bc49"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.102142 4998 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.102171 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbgr4\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-kube-api-access-pbgr4\") on node \"crc\" DevicePath \"\"" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.102181 4998 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.102193 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.102201 4998 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.102209 4998 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.102218 4998 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4055490d-1d4a-4b0b-bf94-e2eaa714bc49-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.279107 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536466-rm8rs" event={"ID":"2c2a58bd-244a-4888-943f-2a222e58689b","Type":"ContainerStarted","Data":"89fadbca31705fbb971881fe94ff2a90356fdc5aa893cde1ce24e4fb4a0936e8"} Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.280019 4998 generic.go:334] "Generic (PLEG): container finished" podID="4055490d-1d4a-4b0b-bf94-e2eaa714bc49" containerID="db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f" exitCode=0 Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.280040 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" event={"ID":"4055490d-1d4a-4b0b-bf94-e2eaa714bc49","Type":"ContainerDied","Data":"db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f"} Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.280054 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" event={"ID":"4055490d-1d4a-4b0b-bf94-e2eaa714bc49","Type":"ContainerDied","Data":"dc898960be9c89eb94ebbc11a240cba660c0c1f56449eef533994680d6237798"} Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.280069 4998 scope.go:117] "RemoveContainer" containerID="db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.280161 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2m9rf" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.306178 4998 scope.go:117] "RemoveContainer" containerID="db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f" Feb 27 10:26:01 crc kubenswrapper[4998]: E0227 10:26:01.314732 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f\": container with ID starting with db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f not found: ID does not exist" containerID="db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.315017 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f"} err="failed to get container status \"db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f\": rpc error: code = NotFound desc = could not find container \"db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f\": container with ID starting with db4e6ac1b424105eed07e278b5c82b027a8845c9e32233fc047ea48cf5a90b6f not found: ID does not exist" Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.325498 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-2m9rf"] Feb 27 10:26:01 crc kubenswrapper[4998]: I0227 10:26:01.330498 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2m9rf"] Feb 27 10:26:02 crc kubenswrapper[4998]: I0227 10:26:02.289858 4998 generic.go:334] "Generic (PLEG): container finished" podID="2c2a58bd-244a-4888-943f-2a222e58689b" containerID="dae27eebac9de940e4f2181001ba3137d52c7d6043e94e5f852f24b8afc2e781" exitCode=0 Feb 27 10:26:02 crc kubenswrapper[4998]: I0227 10:26:02.289988 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536466-rm8rs" event={"ID":"2c2a58bd-244a-4888-943f-2a222e58689b","Type":"ContainerDied","Data":"dae27eebac9de940e4f2181001ba3137d52c7d6043e94e5f852f24b8afc2e781"} Feb 27 10:26:02 crc kubenswrapper[4998]: I0227 10:26:02.773012 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4055490d-1d4a-4b0b-bf94-e2eaa714bc49" path="/var/lib/kubelet/pods/4055490d-1d4a-4b0b-bf94-e2eaa714bc49/volumes" Feb 27 10:26:03 crc kubenswrapper[4998]: I0227 10:26:03.493385 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536466-rm8rs" Feb 27 10:26:03 crc kubenswrapper[4998]: I0227 10:26:03.569142 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2fb7\" (UniqueName: \"kubernetes.io/projected/2c2a58bd-244a-4888-943f-2a222e58689b-kube-api-access-f2fb7\") pod \"2c2a58bd-244a-4888-943f-2a222e58689b\" (UID: \"2c2a58bd-244a-4888-943f-2a222e58689b\") " Feb 27 10:26:03 crc kubenswrapper[4998]: I0227 10:26:03.574181 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2a58bd-244a-4888-943f-2a222e58689b-kube-api-access-f2fb7" (OuterVolumeSpecName: "kube-api-access-f2fb7") pod "2c2a58bd-244a-4888-943f-2a222e58689b" (UID: "2c2a58bd-244a-4888-943f-2a222e58689b"). InnerVolumeSpecName "kube-api-access-f2fb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:26:03 crc kubenswrapper[4998]: I0227 10:26:03.671179 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2fb7\" (UniqueName: \"kubernetes.io/projected/2c2a58bd-244a-4888-943f-2a222e58689b-kube-api-access-f2fb7\") on node \"crc\" DevicePath \"\"" Feb 27 10:26:04 crc kubenswrapper[4998]: I0227 10:26:04.302299 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536466-rm8rs" event={"ID":"2c2a58bd-244a-4888-943f-2a222e58689b","Type":"ContainerDied","Data":"89fadbca31705fbb971881fe94ff2a90356fdc5aa893cde1ce24e4fb4a0936e8"} Feb 27 10:26:04 crc kubenswrapper[4998]: I0227 10:26:04.302350 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89fadbca31705fbb971881fe94ff2a90356fdc5aa893cde1ce24e4fb4a0936e8" Feb 27 10:26:04 crc kubenswrapper[4998]: I0227 10:26:04.302389 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536466-rm8rs" Feb 27 10:26:04 crc kubenswrapper[4998]: I0227 10:26:04.544461 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536460-jgglv"] Feb 27 10:26:04 crc kubenswrapper[4998]: I0227 10:26:04.550770 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536460-jgglv"] Feb 27 10:26:04 crc kubenswrapper[4998]: I0227 10:26:04.773145 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a1e36a-d138-4606-a280-ef688b10a438" path="/var/lib/kubelet/pods/36a1e36a-d138-4606-a280-ef688b10a438/volumes" Feb 27 10:27:40 crc kubenswrapper[4998]: I0227 10:27:40.504476 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:27:40 crc kubenswrapper[4998]: I0227 10:27:40.505047 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.129623 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536468-q85rh"] Feb 27 10:28:00 crc kubenswrapper[4998]: E0227 10:28:00.131082 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4055490d-1d4a-4b0b-bf94-e2eaa714bc49" containerName="registry" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.131097 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4055490d-1d4a-4b0b-bf94-e2eaa714bc49" containerName="registry" Feb 27 10:28:00 crc 
kubenswrapper[4998]: E0227 10:28:00.131115 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2a58bd-244a-4888-943f-2a222e58689b" containerName="oc" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.131122 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2a58bd-244a-4888-943f-2a222e58689b" containerName="oc" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.131212 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2a58bd-244a-4888-943f-2a222e58689b" containerName="oc" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.131247 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="4055490d-1d4a-4b0b-bf94-e2eaa714bc49" containerName="registry" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.131663 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536468-q85rh" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.134577 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.134746 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.134745 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.135143 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536468-q85rh"] Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.311971 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2tn\" (UniqueName: \"kubernetes.io/projected/257fe562-007d-4f87-b3a6-f4f0fab0fa07-kube-api-access-pv2tn\") pod \"auto-csr-approver-29536468-q85rh\" (UID: 
\"257fe562-007d-4f87-b3a6-f4f0fab0fa07\") " pod="openshift-infra/auto-csr-approver-29536468-q85rh" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.413509 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv2tn\" (UniqueName: \"kubernetes.io/projected/257fe562-007d-4f87-b3a6-f4f0fab0fa07-kube-api-access-pv2tn\") pod \"auto-csr-approver-29536468-q85rh\" (UID: \"257fe562-007d-4f87-b3a6-f4f0fab0fa07\") " pod="openshift-infra/auto-csr-approver-29536468-q85rh" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.441160 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv2tn\" (UniqueName: \"kubernetes.io/projected/257fe562-007d-4f87-b3a6-f4f0fab0fa07-kube-api-access-pv2tn\") pod \"auto-csr-approver-29536468-q85rh\" (UID: \"257fe562-007d-4f87-b3a6-f4f0fab0fa07\") " pod="openshift-infra/auto-csr-approver-29536468-q85rh" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.513705 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536468-q85rh" Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.709374 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536468-q85rh"] Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.720148 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:28:00 crc kubenswrapper[4998]: I0227 10:28:00.957446 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536468-q85rh" event={"ID":"257fe562-007d-4f87-b3a6-f4f0fab0fa07","Type":"ContainerStarted","Data":"f542619c1fa90585e1cf0fc40a438642336b42a27ace82858d4f6ae0fa57fa0b"} Feb 27 10:28:02 crc kubenswrapper[4998]: I0227 10:28:02.973038 4998 generic.go:334] "Generic (PLEG): container finished" podID="257fe562-007d-4f87-b3a6-f4f0fab0fa07" containerID="94626b05378281ffd1ded52b44e04f2b9c381229d9f561efab667d0d6bf74250" exitCode=0 Feb 27 10:28:02 crc kubenswrapper[4998]: I0227 10:28:02.973115 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536468-q85rh" event={"ID":"257fe562-007d-4f87-b3a6-f4f0fab0fa07","Type":"ContainerDied","Data":"94626b05378281ffd1ded52b44e04f2b9c381229d9f561efab667d0d6bf74250"} Feb 27 10:28:04 crc kubenswrapper[4998]: I0227 10:28:04.220364 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536468-q85rh" Feb 27 10:28:04 crc kubenswrapper[4998]: I0227 10:28:04.376692 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv2tn\" (UniqueName: \"kubernetes.io/projected/257fe562-007d-4f87-b3a6-f4f0fab0fa07-kube-api-access-pv2tn\") pod \"257fe562-007d-4f87-b3a6-f4f0fab0fa07\" (UID: \"257fe562-007d-4f87-b3a6-f4f0fab0fa07\") " Feb 27 10:28:04 crc kubenswrapper[4998]: I0227 10:28:04.382576 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257fe562-007d-4f87-b3a6-f4f0fab0fa07-kube-api-access-pv2tn" (OuterVolumeSpecName: "kube-api-access-pv2tn") pod "257fe562-007d-4f87-b3a6-f4f0fab0fa07" (UID: "257fe562-007d-4f87-b3a6-f4f0fab0fa07"). InnerVolumeSpecName "kube-api-access-pv2tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:28:04 crc kubenswrapper[4998]: I0227 10:28:04.477907 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv2tn\" (UniqueName: \"kubernetes.io/projected/257fe562-007d-4f87-b3a6-f4f0fab0fa07-kube-api-access-pv2tn\") on node \"crc\" DevicePath \"\"" Feb 27 10:28:04 crc kubenswrapper[4998]: I0227 10:28:04.987689 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536468-q85rh" event={"ID":"257fe562-007d-4f87-b3a6-f4f0fab0fa07","Type":"ContainerDied","Data":"f542619c1fa90585e1cf0fc40a438642336b42a27ace82858d4f6ae0fa57fa0b"} Feb 27 10:28:04 crc kubenswrapper[4998]: I0227 10:28:04.987726 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536468-q85rh" Feb 27 10:28:04 crc kubenswrapper[4998]: I0227 10:28:04.987732 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f542619c1fa90585e1cf0fc40a438642336b42a27ace82858d4f6ae0fa57fa0b" Feb 27 10:28:05 crc kubenswrapper[4998]: I0227 10:28:05.284730 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536462-2r92d"] Feb 27 10:28:05 crc kubenswrapper[4998]: I0227 10:28:05.288178 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536462-2r92d"] Feb 27 10:28:06 crc kubenswrapper[4998]: I0227 10:28:06.774268 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea17fe7-41e2-4264-909f-0e905886524b" path="/var/lib/kubelet/pods/7ea17fe7-41e2-4264-909f-0e905886524b/volumes" Feb 27 10:28:10 crc kubenswrapper[4998]: I0227 10:28:10.504336 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:28:10 crc kubenswrapper[4998]: I0227 10:28:10.504412 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4998]: I0227 10:28:33.207871 4998 scope.go:117] "RemoveContainer" containerID="df0281943c7e1d6fc2638382d6d0da22b78cdbccb85786d23764b01450e40f0c" Feb 27 10:28:33 crc kubenswrapper[4998]: I0227 10:28:33.245843 4998 scope.go:117] "RemoveContainer" 
containerID="713976cb87fce173e3b17a3d70cb1e821996f3247786856d82b49adc76d54e8a" Feb 27 10:28:40 crc kubenswrapper[4998]: I0227 10:28:40.504877 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:28:40 crc kubenswrapper[4998]: I0227 10:28:40.505376 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:28:40 crc kubenswrapper[4998]: I0227 10:28:40.505423 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:28:40 crc kubenswrapper[4998]: I0227 10:28:40.506019 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"234da51b68ba7f355a9213d1b205beeeaf0cebf43b06c886158db76841ca5c10"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:28:40 crc kubenswrapper[4998]: I0227 10:28:40.506076 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://234da51b68ba7f355a9213d1b205beeeaf0cebf43b06c886158db76841ca5c10" gracePeriod=600 Feb 27 10:28:41 crc kubenswrapper[4998]: I0227 10:28:41.197517 4998 generic.go:334] "Generic (PLEG): container finished" 
podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="234da51b68ba7f355a9213d1b205beeeaf0cebf43b06c886158db76841ca5c10" exitCode=0 Feb 27 10:28:41 crc kubenswrapper[4998]: I0227 10:28:41.197614 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"234da51b68ba7f355a9213d1b205beeeaf0cebf43b06c886158db76841ca5c10"} Feb 27 10:28:41 crc kubenswrapper[4998]: I0227 10:28:41.198311 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"9dd84c2d84273411f555b8433ac91db1f4b3ffabd27398f5ba0d8023fe393865"} Feb 27 10:28:41 crc kubenswrapper[4998]: I0227 10:28:41.198339 4998 scope.go:117] "RemoveContainer" containerID="ef0e9e290020f4de6e2dbb18ef565f138df88eee5d534e2df45cae2f81d96bd3" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.140595 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536470-fzk6x"] Feb 27 10:30:00 crc kubenswrapper[4998]: E0227 10:30:00.143176 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257fe562-007d-4f87-b3a6-f4f0fab0fa07" containerName="oc" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.143200 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="257fe562-007d-4f87-b3a6-f4f0fab0fa07" containerName="oc" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.143446 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="257fe562-007d-4f87-b3a6-f4f0fab0fa07" containerName="oc" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.144170 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536470-fzk6x" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.146407 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k"] Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.147454 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.148599 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.148831 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.149103 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.150345 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.150743 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.151412 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k"] Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.155612 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536470-fzk6x"] Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.178319 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-config-volume\") pod \"collect-profiles-29536470-jpm4k\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.178413 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snn9p\" (UniqueName: \"kubernetes.io/projected/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-kube-api-access-snn9p\") pod \"collect-profiles-29536470-jpm4k\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.179831 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-secret-volume\") pod \"collect-profiles-29536470-jpm4k\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.179951 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgwq\" (UniqueName: \"kubernetes.io/projected/eab32a1f-971a-416a-9568-fa7b00bb0476-kube-api-access-4dgwq\") pod \"auto-csr-approver-29536470-fzk6x\" (UID: \"eab32a1f-971a-416a-9568-fa7b00bb0476\") " pod="openshift-infra/auto-csr-approver-29536470-fzk6x" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.281156 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-config-volume\") pod \"collect-profiles-29536470-jpm4k\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 
10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.281275 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snn9p\" (UniqueName: \"kubernetes.io/projected/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-kube-api-access-snn9p\") pod \"collect-profiles-29536470-jpm4k\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.281300 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-secret-volume\") pod \"collect-profiles-29536470-jpm4k\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.281328 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dgwq\" (UniqueName: \"kubernetes.io/projected/eab32a1f-971a-416a-9568-fa7b00bb0476-kube-api-access-4dgwq\") pod \"auto-csr-approver-29536470-fzk6x\" (UID: \"eab32a1f-971a-416a-9568-fa7b00bb0476\") " pod="openshift-infra/auto-csr-approver-29536470-fzk6x" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.282314 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-config-volume\") pod \"collect-profiles-29536470-jpm4k\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.287670 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-secret-volume\") pod \"collect-profiles-29536470-jpm4k\" (UID: 
\"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.298851 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snn9p\" (UniqueName: \"kubernetes.io/projected/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-kube-api-access-snn9p\") pod \"collect-profiles-29536470-jpm4k\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.299142 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dgwq\" (UniqueName: \"kubernetes.io/projected/eab32a1f-971a-416a-9568-fa7b00bb0476-kube-api-access-4dgwq\") pod \"auto-csr-approver-29536470-fzk6x\" (UID: \"eab32a1f-971a-416a-9568-fa7b00bb0476\") " pod="openshift-infra/auto-csr-approver-29536470-fzk6x" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.471467 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536470-fzk6x" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.486497 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.661357 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536470-fzk6x"] Feb 27 10:30:00 crc kubenswrapper[4998]: I0227 10:30:00.702519 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k"] Feb 27 10:30:00 crc kubenswrapper[4998]: W0227 10:30:00.706771 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cd08a0b_2493_44df_993a_6d9acfe1bf6e.slice/crio-84be56ce91f57696cfa688c7c07b471d5eb6058367cd63836242852f74e2c22e WatchSource:0}: Error finding container 84be56ce91f57696cfa688c7c07b471d5eb6058367cd63836242852f74e2c22e: Status 404 returned error can't find the container with id 84be56ce91f57696cfa688c7c07b471d5eb6058367cd63836242852f74e2c22e Feb 27 10:30:01 crc kubenswrapper[4998]: I0227 10:30:01.693375 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536470-fzk6x" event={"ID":"eab32a1f-971a-416a-9568-fa7b00bb0476","Type":"ContainerStarted","Data":"e514c4ba36e54bdf90c3fa7e33dfc26a66da8a4ec4c5ef387d8a24435ae900bd"} Feb 27 10:30:01 crc kubenswrapper[4998]: I0227 10:30:01.695888 4998 generic.go:334] "Generic (PLEG): container finished" podID="7cd08a0b-2493-44df-993a-6d9acfe1bf6e" containerID="672de149f4e450ea35aadc20bda64b8dd7d7a18530a5daa545bbcaad8f84a0d2" exitCode=0 Feb 27 10:30:01 crc kubenswrapper[4998]: I0227 10:30:01.695936 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" event={"ID":"7cd08a0b-2493-44df-993a-6d9acfe1bf6e","Type":"ContainerDied","Data":"672de149f4e450ea35aadc20bda64b8dd7d7a18530a5daa545bbcaad8f84a0d2"} Feb 27 10:30:01 crc kubenswrapper[4998]: I0227 10:30:01.695970 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" event={"ID":"7cd08a0b-2493-44df-993a-6d9acfe1bf6e","Type":"ContainerStarted","Data":"84be56ce91f57696cfa688c7c07b471d5eb6058367cd63836242852f74e2c22e"} Feb 27 10:30:02 crc kubenswrapper[4998]: I0227 10:30:02.702283 4998 generic.go:334] "Generic (PLEG): container finished" podID="eab32a1f-971a-416a-9568-fa7b00bb0476" containerID="aef75ba98977dfd47e03f2db6a71e5c20a68bcf8ce9430d3c2bd545cbc85f697" exitCode=0 Feb 27 10:30:02 crc kubenswrapper[4998]: I0227 10:30:02.702390 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536470-fzk6x" event={"ID":"eab32a1f-971a-416a-9568-fa7b00bb0476","Type":"ContainerDied","Data":"aef75ba98977dfd47e03f2db6a71e5c20a68bcf8ce9430d3c2bd545cbc85f697"} Feb 27 10:30:02 crc kubenswrapper[4998]: I0227 10:30:02.893631 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.012264 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-config-volume\") pod \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.012303 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-secret-volume\") pod \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.012382 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snn9p\" (UniqueName: 
\"kubernetes.io/projected/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-kube-api-access-snn9p\") pod \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\" (UID: \"7cd08a0b-2493-44df-993a-6d9acfe1bf6e\") " Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.013160 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-config-volume" (OuterVolumeSpecName: "config-volume") pod "7cd08a0b-2493-44df-993a-6d9acfe1bf6e" (UID: "7cd08a0b-2493-44df-993a-6d9acfe1bf6e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.018828 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7cd08a0b-2493-44df-993a-6d9acfe1bf6e" (UID: "7cd08a0b-2493-44df-993a-6d9acfe1bf6e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.019759 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-kube-api-access-snn9p" (OuterVolumeSpecName: "kube-api-access-snn9p") pod "7cd08a0b-2493-44df-993a-6d9acfe1bf6e" (UID: "7cd08a0b-2493-44df-993a-6d9acfe1bf6e"). InnerVolumeSpecName "kube-api-access-snn9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.113908 4998 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.113947 4998 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.113963 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snn9p\" (UniqueName: \"kubernetes.io/projected/7cd08a0b-2493-44df-993a-6d9acfe1bf6e-kube-api-access-snn9p\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.709807 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" event={"ID":"7cd08a0b-2493-44df-993a-6d9acfe1bf6e","Type":"ContainerDied","Data":"84be56ce91f57696cfa688c7c07b471d5eb6058367cd63836242852f74e2c22e"} Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.709833 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k" Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.709849 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84be56ce91f57696cfa688c7c07b471d5eb6058367cd63836242852f74e2c22e" Feb 27 10:30:03 crc kubenswrapper[4998]: I0227 10:30:03.911280 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536470-fzk6x" Feb 27 10:30:04 crc kubenswrapper[4998]: I0227 10:30:04.024909 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dgwq\" (UniqueName: \"kubernetes.io/projected/eab32a1f-971a-416a-9568-fa7b00bb0476-kube-api-access-4dgwq\") pod \"eab32a1f-971a-416a-9568-fa7b00bb0476\" (UID: \"eab32a1f-971a-416a-9568-fa7b00bb0476\") " Feb 27 10:30:04 crc kubenswrapper[4998]: I0227 10:30:04.030442 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab32a1f-971a-416a-9568-fa7b00bb0476-kube-api-access-4dgwq" (OuterVolumeSpecName: "kube-api-access-4dgwq") pod "eab32a1f-971a-416a-9568-fa7b00bb0476" (UID: "eab32a1f-971a-416a-9568-fa7b00bb0476"). InnerVolumeSpecName "kube-api-access-4dgwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:30:04 crc kubenswrapper[4998]: I0227 10:30:04.126167 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dgwq\" (UniqueName: \"kubernetes.io/projected/eab32a1f-971a-416a-9568-fa7b00bb0476-kube-api-access-4dgwq\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:04 crc kubenswrapper[4998]: I0227 10:30:04.731691 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536470-fzk6x" event={"ID":"eab32a1f-971a-416a-9568-fa7b00bb0476","Type":"ContainerDied","Data":"e514c4ba36e54bdf90c3fa7e33dfc26a66da8a4ec4c5ef387d8a24435ae900bd"} Feb 27 10:30:04 crc kubenswrapper[4998]: I0227 10:30:04.731777 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e514c4ba36e54bdf90c3fa7e33dfc26a66da8a4ec4c5ef387d8a24435ae900bd" Feb 27 10:30:04 crc kubenswrapper[4998]: I0227 10:30:04.731717 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536470-fzk6x" Feb 27 10:30:04 crc kubenswrapper[4998]: I0227 10:30:04.964108 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536464-dq25g"] Feb 27 10:30:04 crc kubenswrapper[4998]: I0227 10:30:04.967400 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536464-dq25g"] Feb 27 10:30:06 crc kubenswrapper[4998]: I0227 10:30:06.772722 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3688a39-d826-4d07-9d29-d75243003515" path="/var/lib/kubelet/pods/f3688a39-d826-4d07-9d29-d75243003515/volumes" Feb 27 10:30:33 crc kubenswrapper[4998]: I0227 10:30:33.321731 4998 scope.go:117] "RemoveContainer" containerID="6c6e2536bb483858431764173e6b24017bc191e958409ea587c963ea1d34fef8" Feb 27 10:30:40 crc kubenswrapper[4998]: I0227 10:30:40.504663 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:30:40 crc kubenswrapper[4998]: I0227 10:30:40.505173 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.072285 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lfw62"] Feb 27 10:30:52 crc kubenswrapper[4998]: E0227 10:30:52.073090 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd08a0b-2493-44df-993a-6d9acfe1bf6e" containerName="collect-profiles" Feb 27 
10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.073110 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd08a0b-2493-44df-993a-6d9acfe1bf6e" containerName="collect-profiles"
Feb 27 10:30:52 crc kubenswrapper[4998]: E0227 10:30:52.073123 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab32a1f-971a-416a-9568-fa7b00bb0476" containerName="oc"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.073130 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab32a1f-971a-416a-9568-fa7b00bb0476" containerName="oc"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.073260 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd08a0b-2493-44df-993a-6d9acfe1bf6e" containerName="collect-profiles"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.073284 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab32a1f-971a-416a-9568-fa7b00bb0476" containerName="oc"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.073782 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lfw62"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.076287 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.076574 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.076605 4998 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-slq87"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.078727 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lfw62"]
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.083423 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-b2wdg"]
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.084183 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-b2wdg"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.092980 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-v8j65"]
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.093858 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.094316 4998 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lnbp2"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.097405 4998 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hq7q5"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.110822 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-v8j65"]
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.114519 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-b2wdg"]
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.255762 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj7hz\" (UniqueName: \"kubernetes.io/projected/a05eaee3-fb55-4085-95d2-a78ded3a3799-kube-api-access-cj7hz\") pod \"cert-manager-webhook-687f57d79b-v8j65\" (UID: \"a05eaee3-fb55-4085-95d2-a78ded3a3799\") " pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.255846 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slcpb\" (UniqueName: \"kubernetes.io/projected/2ec7f8ec-f44c-458c-b848-641340387d02-kube-api-access-slcpb\") pod \"cert-manager-cainjector-cf98fcc89-lfw62\" (UID: \"2ec7f8ec-f44c-458c-b848-641340387d02\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lfw62"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.255877 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-946pw\" (UniqueName: \"kubernetes.io/projected/06d91e45-d849-4950-98ad-57110fb7f9bf-kube-api-access-946pw\") pod \"cert-manager-858654f9db-b2wdg\" (UID: \"06d91e45-d849-4950-98ad-57110fb7f9bf\") " pod="cert-manager/cert-manager-858654f9db-b2wdg"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.356740 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slcpb\" (UniqueName: \"kubernetes.io/projected/2ec7f8ec-f44c-458c-b848-641340387d02-kube-api-access-slcpb\") pod \"cert-manager-cainjector-cf98fcc89-lfw62\" (UID: \"2ec7f8ec-f44c-458c-b848-641340387d02\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lfw62"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.356787 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-946pw\" (UniqueName: \"kubernetes.io/projected/06d91e45-d849-4950-98ad-57110fb7f9bf-kube-api-access-946pw\") pod \"cert-manager-858654f9db-b2wdg\" (UID: \"06d91e45-d849-4950-98ad-57110fb7f9bf\") " pod="cert-manager/cert-manager-858654f9db-b2wdg"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.356856 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj7hz\" (UniqueName: \"kubernetes.io/projected/a05eaee3-fb55-4085-95d2-a78ded3a3799-kube-api-access-cj7hz\") pod \"cert-manager-webhook-687f57d79b-v8j65\" (UID: \"a05eaee3-fb55-4085-95d2-a78ded3a3799\") " pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.376776 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-946pw\" (UniqueName: \"kubernetes.io/projected/06d91e45-d849-4950-98ad-57110fb7f9bf-kube-api-access-946pw\") pod \"cert-manager-858654f9db-b2wdg\" (UID: \"06d91e45-d849-4950-98ad-57110fb7f9bf\") " pod="cert-manager/cert-manager-858654f9db-b2wdg"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.376966 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj7hz\" (UniqueName: \"kubernetes.io/projected/a05eaee3-fb55-4085-95d2-a78ded3a3799-kube-api-access-cj7hz\") pod \"cert-manager-webhook-687f57d79b-v8j65\" (UID: \"a05eaee3-fb55-4085-95d2-a78ded3a3799\") " pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.377516 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slcpb\" (UniqueName: \"kubernetes.io/projected/2ec7f8ec-f44c-458c-b848-641340387d02-kube-api-access-slcpb\") pod \"cert-manager-cainjector-cf98fcc89-lfw62\" (UID: \"2ec7f8ec-f44c-458c-b848-641340387d02\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lfw62"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.390558 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lfw62"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.405420 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-b2wdg"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.414525 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65"
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.608610 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lfw62"]
Feb 27 10:30:52 crc kubenswrapper[4998]: W0227 10:30:52.615072 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ec7f8ec_f44c_458c_b848_641340387d02.slice/crio-c100ce65483c2974c674bc13955f5f1d56811365a6b56ab9f2df5c9a7d3004f3 WatchSource:0}: Error finding container c100ce65483c2974c674bc13955f5f1d56811365a6b56ab9f2df5c9a7d3004f3: Status 404 returned error can't find the container with id c100ce65483c2974c674bc13955f5f1d56811365a6b56ab9f2df5c9a7d3004f3
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.634874 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-b2wdg"]
Feb 27 10:30:52 crc kubenswrapper[4998]: I0227 10:30:52.671553 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-v8j65"]
Feb 27 10:30:52 crc kubenswrapper[4998]: W0227 10:30:52.672590 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05eaee3_fb55_4085_95d2_a78ded3a3799.slice/crio-8234336e48468ec6197cd39be1ed473eb24430c9cbedc3e7dfc9e179db35ebbf WatchSource:0}: Error finding container 8234336e48468ec6197cd39be1ed473eb24430c9cbedc3e7dfc9e179db35ebbf: Status 404 returned error can't find the container with id 8234336e48468ec6197cd39be1ed473eb24430c9cbedc3e7dfc9e179db35ebbf
Feb 27 10:30:53 crc kubenswrapper[4998]: I0227 10:30:53.001463 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-b2wdg" event={"ID":"06d91e45-d849-4950-98ad-57110fb7f9bf","Type":"ContainerStarted","Data":"5a4773895969cfd2ac8dc8b8cdcbcb66b3e93c6d5f5221a0c7f3d04aeca0cce8"}
Feb 27 10:30:53 crc kubenswrapper[4998]: I0227 10:30:53.002719 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lfw62" event={"ID":"2ec7f8ec-f44c-458c-b848-641340387d02","Type":"ContainerStarted","Data":"c100ce65483c2974c674bc13955f5f1d56811365a6b56ab9f2df5c9a7d3004f3"}
Feb 27 10:30:53 crc kubenswrapper[4998]: I0227 10:30:53.003654 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65" event={"ID":"a05eaee3-fb55-4085-95d2-a78ded3a3799","Type":"ContainerStarted","Data":"8234336e48468ec6197cd39be1ed473eb24430c9cbedc3e7dfc9e179db35ebbf"}
Feb 27 10:30:57 crc kubenswrapper[4998]: I0227 10:30:57.026567 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-b2wdg" event={"ID":"06d91e45-d849-4950-98ad-57110fb7f9bf","Type":"ContainerStarted","Data":"a25c6a9e3cebee1d3d6c466d2a29bef582bc3540cce28fe40a757cbb3bfd216e"}
Feb 27 10:30:57 crc kubenswrapper[4998]: I0227 10:30:57.028539 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lfw62" event={"ID":"2ec7f8ec-f44c-458c-b848-641340387d02","Type":"ContainerStarted","Data":"4a2a699cdda52925d9a7d5d76fbf7013c6e592c6d73c46bdcc947672231d920b"}
Feb 27 10:30:57 crc kubenswrapper[4998]: I0227 10:30:57.029463 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65" event={"ID":"a05eaee3-fb55-4085-95d2-a78ded3a3799","Type":"ContainerStarted","Data":"ca2ec28647c649de0c496c37f82a12158164031ffcb90586d3ae9935a6e4547f"}
Feb 27 10:30:57 crc kubenswrapper[4998]: I0227 10:30:57.029707 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65"
Feb 27 10:30:57 crc kubenswrapper[4998]: I0227 10:30:57.044516 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-b2wdg" podStartSLOduration=1.008006073 podStartE2EDuration="5.044490209s" podCreationTimestamp="2026-02-27 10:30:52 +0000 UTC" firstStartedPulling="2026-02-27 10:30:52.644648716 +0000 UTC m=+804.642919684" lastFinishedPulling="2026-02-27 10:30:56.681132842 +0000 UTC m=+808.679403820" observedRunningTime="2026-02-27 10:30:57.041585111 +0000 UTC m=+809.039856079" watchObservedRunningTime="2026-02-27 10:30:57.044490209 +0000 UTC m=+809.042761187"
Feb 27 10:30:57 crc kubenswrapper[4998]: I0227 10:30:57.060310 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65" podStartSLOduration=1.070808399 podStartE2EDuration="5.060291176s" podCreationTimestamp="2026-02-27 10:30:52 +0000 UTC" firstStartedPulling="2026-02-27 10:30:52.674972334 +0000 UTC m=+804.673243302" lastFinishedPulling="2026-02-27 10:30:56.664455111 +0000 UTC m=+808.662726079" observedRunningTime="2026-02-27 10:30:57.059072333 +0000 UTC m=+809.057343301" watchObservedRunningTime="2026-02-27 10:30:57.060291176 +0000 UTC m=+809.058562144"
Feb 27 10:30:57 crc kubenswrapper[4998]: I0227 10:30:57.097898 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lfw62" podStartSLOduration=1.096529015 podStartE2EDuration="5.097882202s" podCreationTimestamp="2026-02-27 10:30:52 +0000 UTC" firstStartedPulling="2026-02-27 10:30:52.620585305 +0000 UTC m=+804.618856273" lastFinishedPulling="2026-02-27 10:30:56.621938492 +0000 UTC m=+808.620209460" observedRunningTime="2026-02-27 10:30:57.095692253 +0000 UTC m=+809.093963221" watchObservedRunningTime="2026-02-27 10:30:57.097882202 +0000 UTC m=+809.096153170"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.268329 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wh9xl"]
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.270194 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovn-controller" containerID="cri-o://f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e" gracePeriod=30
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.270283 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e" gracePeriod=30
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.270286 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="nbdb" containerID="cri-o://0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53" gracePeriod=30
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.270302 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="northd" containerID="cri-o://e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467" gracePeriod=30
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.270320 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kube-rbac-proxy-node" containerID="cri-o://b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb" gracePeriod=30
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.270371 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovn-acl-logging" containerID="cri-o://92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba" gracePeriod=30
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.270389 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="sbdb" containerID="cri-o://6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6" gracePeriod=30
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.327035 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller" containerID="cri-o://e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af" gracePeriod=30
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.427693 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-v8j65"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.622176 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/3.log"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.624565 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovn-acl-logging/0.log"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.625072 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovn-controller/0.log"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.625541 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679362 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4542c"]
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679566 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679578 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679588 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="northd"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679593 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="northd"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679602 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679608 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679620 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovn-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679625 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovn-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679631 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kube-rbac-proxy-ovn-metrics"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679637 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kube-rbac-proxy-ovn-metrics"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679645 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovn-acl-logging"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679650 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovn-acl-logging"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679657 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kube-rbac-proxy-node"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679663 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kube-rbac-proxy-node"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679672 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kubecfg-setup"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679677 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kubecfg-setup"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679685 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679690 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679697 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="nbdb"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679703 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="nbdb"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679713 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="sbdb"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679718 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="sbdb"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679801 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kube-rbac-proxy-ovn-metrics"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679811 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovn-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679819 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="northd"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679827 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679834 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="sbdb"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679842 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="kube-rbac-proxy-node"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679850 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679857 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovn-acl-logging"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679863 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679871 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679880 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="nbdb"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.679957 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.679964 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.680049 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: E0227 10:31:02.680137 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.680143 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerName="ovnkube-controller"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.681563 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692253 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-slash\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692299 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-var-lib-openvswitch\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692318 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-node-log\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692333 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-run-ovn-kubernetes\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692357 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/074b4037-45f7-4414-bdd8-b40978263896-ovn-node-metrics-cert\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692377 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-run-systemd\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692399 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-run-netns\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692416 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-run-ovn\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692437 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/074b4037-45f7-4414-bdd8-b40978263896-env-overrides\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692452 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/074b4037-45f7-4414-bdd8-b40978263896-ovnkube-script-lib\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692471 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-cni-bin\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692488 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-etc-openvswitch\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692501 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-kubelet\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692519 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/074b4037-45f7-4414-bdd8-b40978263896-ovnkube-config\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692539 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-systemd-units\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692553 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-log-socket\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692570 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-cni-netd\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692590 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-run-openvswitch\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692681 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddmdz\" (UniqueName: \"kubernetes.io/projected/074b4037-45f7-4414-bdd8-b40978263896-kube-api-access-ddmdz\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.692708 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c"
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.793514 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-bin\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.793627 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-systemd-units\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.793716 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-env-overrides\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.793780 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovn-node-metrics-cert\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.793815 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-var-lib-openvswitch\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.793848 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-etc-openvswitch\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.793901 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-script-lib\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.793943 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-netns\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.793995 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-ovn-kubernetes\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794035 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-kubelet\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794096 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794131 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-systemd\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794160 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-slash\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794192 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-node-log\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794261 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-ovn\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") "
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794295 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xxgq\" (UniqueName: \"kubernetes.io/projected/bceef7ff-b99d-432e-b9cb-7c538c82b74b-kube-api-access-9xxgq\") pod 
\"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794342 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-config\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794382 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-netd\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794407 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-log-socket\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794458 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-openvswitch\") pod \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\" (UID: \"bceef7ff-b99d-432e-b9cb-7c538c82b74b\") " Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794661 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-cni-bin\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794719 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-kubelet\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794767 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-etc-openvswitch\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794814 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/074b4037-45f7-4414-bdd8-b40978263896-ovnkube-config\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794864 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-systemd-units\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794908 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-log-socket\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794949 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-cni-netd\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.794997 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-run-openvswitch\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795059 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddmdz\" (UniqueName: \"kubernetes.io/projected/074b4037-45f7-4414-bdd8-b40978263896-kube-api-access-ddmdz\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795111 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795180 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-slash\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795312 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-var-lib-openvswitch\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795362 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-run-ovn-kubernetes\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795412 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-node-log\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795470 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/074b4037-45f7-4414-bdd8-b40978263896-ovn-node-metrics-cert\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795529 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-run-systemd\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795545 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-systemd-units\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795597 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-run-netns\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795646 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-run-ovn\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795698 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/074b4037-45f7-4414-bdd8-b40978263896-env-overrides\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.795734 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/074b4037-45f7-4414-bdd8-b40978263896-ovnkube-script-lib\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796097 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod 
"bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796134 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796154 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-log-socket" (OuterVolumeSpecName: "log-socket") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796172 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796204 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-cni-bin\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796248 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-kubelet\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796275 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-etc-openvswitch\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796271 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796812 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/074b4037-45f7-4414-bdd8-b40978263896-ovnkube-config\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796854 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796875 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796905 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796940 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.796981 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-var-lib-openvswitch\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797020 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-log-socket\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797052 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-cni-netd\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797084 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-run-openvswitch\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" 
Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797136 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/074b4037-45f7-4414-bdd8-b40978263896-ovnkube-script-lib\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797283 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-run-ovn-kubernetes\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797360 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-node-log\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797457 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797497 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-slash\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 
10:31:02.797527 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-slash" (OuterVolumeSpecName: "host-slash") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797521 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.797554 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.798289 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.798368 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-node-log" (OuterVolumeSpecName: "node-log") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.798417 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.798486 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-run-systemd\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.798452 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-host-run-netns\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.798438 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: 
"bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.798518 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.798564 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/074b4037-45f7-4414-bdd8-b40978263896-run-ovn\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.799127 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/074b4037-45f7-4414-bdd8-b40978263896-env-overrides\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.801428 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bceef7ff-b99d-432e-b9cb-7c538c82b74b-kube-api-access-9xxgq" (OuterVolumeSpecName: "kube-api-access-9xxgq") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "kube-api-access-9xxgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.801766 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/074b4037-45f7-4414-bdd8-b40978263896-ovn-node-metrics-cert\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.802829 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.810739 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bceef7ff-b99d-432e-b9cb-7c538c82b74b" (UID: "bceef7ff-b99d-432e-b9cb-7c538c82b74b"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.813120 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddmdz\" (UniqueName: \"kubernetes.io/projected/074b4037-45f7-4414-bdd8-b40978263896-kube-api-access-ddmdz\") pod \"ovnkube-node-4542c\" (UID: \"074b4037-45f7-4414-bdd8-b40978263896\") " pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896835 4998 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896866 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xxgq\" (UniqueName: \"kubernetes.io/projected/bceef7ff-b99d-432e-b9cb-7c538c82b74b-kube-api-access-9xxgq\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896876 4998 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896884 4998 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896892 4998 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-log-socket\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896903 4998 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896910 4998 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896918 4998 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896926 4998 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896935 4998 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896943 4998 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896950 4998 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896958 4998 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-netns\") on node \"crc\" DevicePath 
\"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896966 4998 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bceef7ff-b99d-432e-b9cb-7c538c82b74b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896973 4998 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896984 4998 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.896993 4998 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.897001 4998 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.897010 4998 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-host-slash\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.897017 4998 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bceef7ff-b99d-432e-b9cb-7c538c82b74b-node-log\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:02 crc kubenswrapper[4998]: I0227 10:31:02.999016 4998 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:03 crc kubenswrapper[4998]: W0227 10:31:03.026527 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod074b4037_45f7_4414_bdd8_b40978263896.slice/crio-80f980d9a7a70cd791ff6ad09a5ef671b1bb94d2ef22cc1d8d9f775211bfe339 WatchSource:0}: Error finding container 80f980d9a7a70cd791ff6ad09a5ef671b1bb94d2ef22cc1d8d9f775211bfe339: Status 404 returned error can't find the container with id 80f980d9a7a70cd791ff6ad09a5ef671b1bb94d2ef22cc1d8d9f775211bfe339 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.063554 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/2.log" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.063980 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/1.log" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.064029 4998 generic.go:334] "Generic (PLEG): container finished" podID="a046a5ca-7081-4920-98af-1027a5bc29d0" containerID="d3e2963b299c9c91d93abf85f31c8d17e14dd7e330e911092cdfcb10879314ea" exitCode=2 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.064091 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46lvx" event={"ID":"a046a5ca-7081-4920-98af-1027a5bc29d0","Type":"ContainerDied","Data":"d3e2963b299c9c91d93abf85f31c8d17e14dd7e330e911092cdfcb10879314ea"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.064131 4998 scope.go:117] "RemoveContainer" containerID="cf965cba1260453351278ff54565d3985b1c2eeee25ddba57db0ce0f8e335a93" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.064650 4998 scope.go:117] "RemoveContainer" 
containerID="d3e2963b299c9c91d93abf85f31c8d17e14dd7e330e911092cdfcb10879314ea" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.064838 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-46lvx_openshift-multus(a046a5ca-7081-4920-98af-1027a5bc29d0)\"" pod="openshift-multus/multus-46lvx" podUID="a046a5ca-7081-4920-98af-1027a5bc29d0" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.065303 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerStarted","Data":"80f980d9a7a70cd791ff6ad09a5ef671b1bb94d2ef22cc1d8d9f775211bfe339"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.069906 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovnkube-controller/3.log" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.074749 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovn-acl-logging/0.log" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075219 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wh9xl_bceef7ff-b99d-432e-b9cb-7c538c82b74b/ovn-controller/0.log" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075713 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af" exitCode=0 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075737 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" 
containerID="6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6" exitCode=0 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075746 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53" exitCode=0 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075753 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467" exitCode=0 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075759 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e" exitCode=0 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075765 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb" exitCode=0 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075773 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba" exitCode=143 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075779 4998 generic.go:334] "Generic (PLEG): container finished" podID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" containerID="f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e" exitCode=143 Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075799 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 
10:31:03.075831 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075855 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075869 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075873 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075883 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075899 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075914 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075926 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075932 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075938 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075943 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} Feb 27 10:31:03 crc kubenswrapper[4998]: 
I0227 10:31:03.075948 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075954 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075959 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075964 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075969 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075976 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075984 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.075992 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076005 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076013 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076020 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076028 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076037 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076043 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076048 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076053 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076062 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076074 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076080 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076086 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076095 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076101 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076106 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076112 4998 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076117 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076122 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076127 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076134 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wh9xl" event={"ID":"bceef7ff-b99d-432e-b9cb-7c538c82b74b","Type":"ContainerDied","Data":"22a3f9cd8b6410eb2f98b272fca1e976ac2afcdccedeffc1283ee8fa073179f6"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076142 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076148 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076154 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} Feb 27 
10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076159 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076164 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076170 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076175 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076180 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076185 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.076190 4998 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c"} Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.094177 4998 scope.go:117] "RemoveContainer" containerID="e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.114186 4998 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wh9xl"] Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.118649 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wh9xl"] Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.120473 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.207517 4998 scope.go:117] "RemoveContainer" containerID="6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.226195 4998 scope.go:117] "RemoveContainer" containerID="0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.238077 4998 scope.go:117] "RemoveContainer" containerID="e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.248916 4998 scope.go:117] "RemoveContainer" containerID="4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.268872 4998 scope.go:117] "RemoveContainer" containerID="b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.280924 4998 scope.go:117] "RemoveContainer" containerID="92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.293471 4998 scope.go:117] "RemoveContainer" containerID="f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.308206 4998 scope.go:117] "RemoveContainer" containerID="a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.321290 4998 scope.go:117] "RemoveContainer" 
containerID="e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.321731 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af\": container with ID starting with e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af not found: ID does not exist" containerID="e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.321797 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} err="failed to get container status \"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af\": rpc error: code = NotFound desc = could not find container \"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af\": container with ID starting with e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.321841 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.322276 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\": container with ID starting with 2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b not found: ID does not exist" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.322314 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} err="failed to get container status \"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\": rpc error: code = NotFound desc = could not find container \"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\": container with ID starting with 2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.322339 4998 scope.go:117] "RemoveContainer" containerID="6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.322680 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\": container with ID starting with 6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6 not found: ID does not exist" containerID="6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.322716 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} err="failed to get container status \"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\": rpc error: code = NotFound desc = could not find container \"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\": container with ID starting with 6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.322739 4998 scope.go:117] "RemoveContainer" containerID="0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.323005 4998 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\": container with ID starting with 0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53 not found: ID does not exist" containerID="0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.323053 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} err="failed to get container status \"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\": rpc error: code = NotFound desc = could not find container \"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\": container with ID starting with 0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.323070 4998 scope.go:117] "RemoveContainer" containerID="e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.323525 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\": container with ID starting with e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467 not found: ID does not exist" containerID="e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.323553 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} err="failed to get container status \"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\": rpc error: code = NotFound desc = could not find container 
\"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\": container with ID starting with e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.323574 4998 scope.go:117] "RemoveContainer" containerID="4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.323825 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\": container with ID starting with 4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e not found: ID does not exist" containerID="4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.323848 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} err="failed to get container status \"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\": rpc error: code = NotFound desc = could not find container \"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\": container with ID starting with 4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.323862 4998 scope.go:117] "RemoveContainer" containerID="b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.324140 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\": container with ID starting with b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb not found: ID does not exist" 
containerID="b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.324186 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} err="failed to get container status \"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\": rpc error: code = NotFound desc = could not find container \"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\": container with ID starting with b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.324209 4998 scope.go:117] "RemoveContainer" containerID="92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.324480 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\": container with ID starting with 92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba not found: ID does not exist" containerID="92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.324509 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} err="failed to get container status \"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\": rpc error: code = NotFound desc = could not find container \"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\": container with ID starting with 92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.324526 4998 scope.go:117] 
"RemoveContainer" containerID="f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.324808 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\": container with ID starting with f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e not found: ID does not exist" containerID="f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.324838 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} err="failed to get container status \"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\": rpc error: code = NotFound desc = could not find container \"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\": container with ID starting with f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.324858 4998 scope.go:117] "RemoveContainer" containerID="a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c" Feb 27 10:31:03 crc kubenswrapper[4998]: E0227 10:31:03.325137 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\": container with ID starting with a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c not found: ID does not exist" containerID="a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.325177 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c"} err="failed to get container status \"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\": rpc error: code = NotFound desc = could not find container \"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\": container with ID starting with a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.325191 4998 scope.go:117] "RemoveContainer" containerID="e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.325483 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} err="failed to get container status \"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af\": rpc error: code = NotFound desc = could not find container \"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af\": container with ID starting with e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.325560 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.326031 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} err="failed to get container status \"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\": rpc error: code = NotFound desc = could not find container \"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\": container with ID starting with 2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b not found: ID does not 
exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.326078 4998 scope.go:117] "RemoveContainer" containerID="6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.326459 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} err="failed to get container status \"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\": rpc error: code = NotFound desc = could not find container \"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\": container with ID starting with 6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.326492 4998 scope.go:117] "RemoveContainer" containerID="0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.326785 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} err="failed to get container status \"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\": rpc error: code = NotFound desc = could not find container \"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\": container with ID starting with 0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.326811 4998 scope.go:117] "RemoveContainer" containerID="e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327034 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} err="failed to get container status 
\"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\": rpc error: code = NotFound desc = could not find container \"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\": container with ID starting with e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327056 4998 scope.go:117] "RemoveContainer" containerID="4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327266 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} err="failed to get container status \"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\": rpc error: code = NotFound desc = could not find container \"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\": container with ID starting with 4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327341 4998 scope.go:117] "RemoveContainer" containerID="b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327523 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} err="failed to get container status \"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\": rpc error: code = NotFound desc = could not find container \"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\": container with ID starting with b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327545 4998 scope.go:117] "RemoveContainer" 
containerID="92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327707 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} err="failed to get container status \"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\": rpc error: code = NotFound desc = could not find container \"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\": container with ID starting with 92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327724 4998 scope.go:117] "RemoveContainer" containerID="f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327905 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} err="failed to get container status \"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\": rpc error: code = NotFound desc = could not find container \"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\": container with ID starting with f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.327923 4998 scope.go:117] "RemoveContainer" containerID="a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.328108 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c"} err="failed to get container status \"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\": rpc error: code = NotFound desc = could 
not find container \"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\": container with ID starting with a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.328125 4998 scope.go:117] "RemoveContainer" containerID="e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.328462 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} err="failed to get container status \"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af\": rpc error: code = NotFound desc = could not find container \"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af\": container with ID starting with e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.328481 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.328729 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} err="failed to get container status \"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\": rpc error: code = NotFound desc = could not find container \"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\": container with ID starting with 2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.328747 4998 scope.go:117] "RemoveContainer" containerID="6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 
10:31:03.328927 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} err="failed to get container status \"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\": rpc error: code = NotFound desc = could not find container \"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\": container with ID starting with 6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.328943 4998 scope.go:117] "RemoveContainer" containerID="0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.329198 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} err="failed to get container status \"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\": rpc error: code = NotFound desc = could not find container \"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\": container with ID starting with 0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.329216 4998 scope.go:117] "RemoveContainer" containerID="e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.329452 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} err="failed to get container status \"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\": rpc error: code = NotFound desc = could not find container \"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\": container with ID starting with 
e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.329470 4998 scope.go:117] "RemoveContainer" containerID="4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.329687 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} err="failed to get container status \"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\": rpc error: code = NotFound desc = could not find container \"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\": container with ID starting with 4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.329700 4998 scope.go:117] "RemoveContainer" containerID="b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.329913 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} err="failed to get container status \"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\": rpc error: code = NotFound desc = could not find container \"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\": container with ID starting with b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.329936 4998 scope.go:117] "RemoveContainer" containerID="92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.330145 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} err="failed to get container status \"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\": rpc error: code = NotFound desc = could not find container \"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\": container with ID starting with 92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.330164 4998 scope.go:117] "RemoveContainer" containerID="f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.330838 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} err="failed to get container status \"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\": rpc error: code = NotFound desc = could not find container \"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\": container with ID starting with f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.330860 4998 scope.go:117] "RemoveContainer" containerID="a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.331101 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c"} err="failed to get container status \"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\": rpc error: code = NotFound desc = could not find container \"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\": container with ID starting with a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c not found: ID does not 
exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.331138 4998 scope.go:117] "RemoveContainer" containerID="e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.331401 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af"} err="failed to get container status \"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af\": rpc error: code = NotFound desc = could not find container \"e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af\": container with ID starting with e9eaff90d2da92c1eae5a79cd037518bcdfab96974f21aa40ee77de2eba576af not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.331421 4998 scope.go:117] "RemoveContainer" containerID="2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.331615 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b"} err="failed to get container status \"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\": rpc error: code = NotFound desc = could not find container \"2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b\": container with ID starting with 2d3b4c0f3e17786c3bc75771f5dffed4f1c11d33aea4bd153117e1eda621a35b not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.331637 4998 scope.go:117] "RemoveContainer" containerID="6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.331812 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6"} err="failed to get container status 
\"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\": rpc error: code = NotFound desc = could not find container \"6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6\": container with ID starting with 6f39b4e222d74a12d8889750f34e5bb8e07d73c9d7fcdbaaebe18066fba744a6 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.331829 4998 scope.go:117] "RemoveContainer" containerID="0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332019 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53"} err="failed to get container status \"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\": rpc error: code = NotFound desc = could not find container \"0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53\": container with ID starting with 0cc4f81dc4311732e185b32536af72d8391b070ab0b822258c13c55b9053cf53 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332039 4998 scope.go:117] "RemoveContainer" containerID="e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332241 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467"} err="failed to get container status \"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\": rpc error: code = NotFound desc = could not find container \"e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467\": container with ID starting with e38b883925e98a1b15113d65d2f9b3e133e05c14ac63651039076b8518d42467 not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332260 4998 scope.go:117] "RemoveContainer" 
containerID="4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332441 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e"} err="failed to get container status \"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\": rpc error: code = NotFound desc = could not find container \"4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e\": container with ID starting with 4f8bd4e21e6e078a8e49ae2264c035072fb77179deb36176c991ff790ea25b0e not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332458 4998 scope.go:117] "RemoveContainer" containerID="b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332643 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb"} err="failed to get container status \"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\": rpc error: code = NotFound desc = could not find container \"b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb\": container with ID starting with b586f4d902be28e9eed6ab0b15e1c5bbd3dbf2add143844fff0835e088a1e3fb not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332665 4998 scope.go:117] "RemoveContainer" containerID="92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332820 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba"} err="failed to get container status \"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\": rpc error: code = NotFound desc = could 
not find container \"92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba\": container with ID starting with 92c409cddda85ffd90dc0522d28d181bb804f68f82561e82217b3e78cd9938ba not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.332839 4998 scope.go:117] "RemoveContainer" containerID="f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.333033 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e"} err="failed to get container status \"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\": rpc error: code = NotFound desc = could not find container \"f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e\": container with ID starting with f80297fee7fa0ab3d0d6a5975984b83a2d0fa2a368a3fc6d85affe2af75e326e not found: ID does not exist" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.333050 4998 scope.go:117] "RemoveContainer" containerID="a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c" Feb 27 10:31:03 crc kubenswrapper[4998]: I0227 10:31:03.333239 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c"} err="failed to get container status \"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\": rpc error: code = NotFound desc = could not find container \"a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c\": container with ID starting with a581b9e4264b5768a3af01485da05fd006f3cd0d379ef13f228187faa74f297c not found: ID does not exist" Feb 27 10:31:04 crc kubenswrapper[4998]: I0227 10:31:04.087093 4998 generic.go:334] "Generic (PLEG): container finished" podID="074b4037-45f7-4414-bdd8-b40978263896" 
containerID="124542bfe280884e7934d758f290540ba08cac6c431d771fc171e27efa2d7652" exitCode=0 Feb 27 10:31:04 crc kubenswrapper[4998]: I0227 10:31:04.087177 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerDied","Data":"124542bfe280884e7934d758f290540ba08cac6c431d771fc171e27efa2d7652"} Feb 27 10:31:04 crc kubenswrapper[4998]: I0227 10:31:04.095869 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/2.log" Feb 27 10:31:04 crc kubenswrapper[4998]: I0227 10:31:04.771545 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bceef7ff-b99d-432e-b9cb-7c538c82b74b" path="/var/lib/kubelet/pods/bceef7ff-b99d-432e-b9cb-7c538c82b74b/volumes" Feb 27 10:31:05 crc kubenswrapper[4998]: I0227 10:31:05.105727 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerStarted","Data":"f3a6b9a0cb9852d54f9de48a1f166ee1424b619efd9481e58a58015f16a134f6"} Feb 27 10:31:05 crc kubenswrapper[4998]: I0227 10:31:05.105794 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerStarted","Data":"e3c5a58c19304ca3ff9cfe749b4d17b33ca23e465bedd45c7af7fded1b555fbd"} Feb 27 10:31:05 crc kubenswrapper[4998]: I0227 10:31:05.105806 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerStarted","Data":"65ecea9c69d36248c5a682f5320eaaa56fa920b06c51f73d4f9c6fee41a96645"} Feb 27 10:31:05 crc kubenswrapper[4998]: I0227 10:31:05.105820 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" 
event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerStarted","Data":"c0f489dad9faa2919dcb4c87720ecdb69679d17ced5558d1d1ee30852d423dfb"} Feb 27 10:31:05 crc kubenswrapper[4998]: I0227 10:31:05.105852 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerStarted","Data":"3825e1cbf49bee30923ee7c4923c55d4aa8d2ccb689f16cfa01ad6637a926fb1"} Feb 27 10:31:05 crc kubenswrapper[4998]: I0227 10:31:05.105862 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerStarted","Data":"f17a20898cf3092fa79ca5d3e981d1e9d45f6f38dbe4d46dc58648d929340f63"} Feb 27 10:31:07 crc kubenswrapper[4998]: I0227 10:31:07.121756 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerStarted","Data":"3e388c8c69355ba1943c14e8a9a9acfc54ad273312f11a5323af6d30d5823aee"} Feb 27 10:31:10 crc kubenswrapper[4998]: I0227 10:31:10.141951 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" event={"ID":"074b4037-45f7-4414-bdd8-b40978263896","Type":"ContainerStarted","Data":"22fb7acc8a46940ce05072fcea7897e645e1f968dd480993f32bc0d388dd665e"} Feb 27 10:31:10 crc kubenswrapper[4998]: I0227 10:31:10.142387 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:10 crc kubenswrapper[4998]: I0227 10:31:10.142436 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:10 crc kubenswrapper[4998]: I0227 10:31:10.169694 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" 
podStartSLOduration=8.16967862 podStartE2EDuration="8.16967862s" podCreationTimestamp="2026-02-27 10:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:31:10.165812446 +0000 UTC m=+822.164083414" watchObservedRunningTime="2026-02-27 10:31:10.16967862 +0000 UTC m=+822.167949588" Feb 27 10:31:10 crc kubenswrapper[4998]: I0227 10:31:10.187757 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:10 crc kubenswrapper[4998]: I0227 10:31:10.505012 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:31:10 crc kubenswrapper[4998]: I0227 10:31:10.505145 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:31:11 crc kubenswrapper[4998]: I0227 10:31:11.147062 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:11 crc kubenswrapper[4998]: I0227 10:31:11.182278 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:15 crc kubenswrapper[4998]: I0227 10:31:15.765187 4998 scope.go:117] "RemoveContainer" containerID="d3e2963b299c9c91d93abf85f31c8d17e14dd7e330e911092cdfcb10879314ea" Feb 27 10:31:15 crc kubenswrapper[4998]: E0227 10:31:15.765951 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-46lvx_openshift-multus(a046a5ca-7081-4920-98af-1027a5bc29d0)\"" pod="openshift-multus/multus-46lvx" podUID="a046a5ca-7081-4920-98af-1027a5bc29d0" Feb 27 10:31:27 crc kubenswrapper[4998]: I0227 10:31:27.765761 4998 scope.go:117] "RemoveContainer" containerID="d3e2963b299c9c91d93abf85f31c8d17e14dd7e330e911092cdfcb10879314ea" Feb 27 10:31:28 crc kubenswrapper[4998]: I0227 10:31:28.268695 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46lvx_a046a5ca-7081-4920-98af-1027a5bc29d0/kube-multus/2.log" Feb 27 10:31:28 crc kubenswrapper[4998]: I0227 10:31:28.269039 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46lvx" event={"ID":"a046a5ca-7081-4920-98af-1027a5bc29d0","Type":"ContainerStarted","Data":"ca313d39686ed2186ff1bbe9fd5159320049eb567b7b3e642c4268d66246b70a"} Feb 27 10:31:33 crc kubenswrapper[4998]: I0227 10:31:33.024262 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4542c" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.214356 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95"] Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.215775 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.217898 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.222829 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95"] Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.366819 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.367288 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.367418 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96w64\" (UniqueName: \"kubernetes.io/projected/798850d8-3ba1-4af9-a3d5-4df55bf658a4-kube-api-access-96w64\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: 
I0227 10:31:38.468275 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.468319 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96w64\" (UniqueName: \"kubernetes.io/projected/798850d8-3ba1-4af9-a3d5-4df55bf658a4-kube-api-access-96w64\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.468364 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.469012 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.469034 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.491584 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96w64\" (UniqueName: \"kubernetes.io/projected/798850d8-3ba1-4af9-a3d5-4df55bf658a4-kube-api-access-96w64\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.570900 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:38 crc kubenswrapper[4998]: I0227 10:31:38.951976 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95"] Feb 27 10:31:39 crc kubenswrapper[4998]: I0227 10:31:39.328999 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" event={"ID":"798850d8-3ba1-4af9-a3d5-4df55bf658a4","Type":"ContainerStarted","Data":"2d27c0d35a347bc7615ce96053b8e3f718aeba61e90332178c6b19a5c7814207"} Feb 27 10:31:40 crc kubenswrapper[4998]: I0227 10:31:40.335745 4998 generic.go:334] "Generic (PLEG): container finished" podID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" containerID="64a49c7a15707973eec81a6c76ff2742f0d6d9ef53135e3020f7766d890f2f88" exitCode=0 Feb 27 10:31:40 crc kubenswrapper[4998]: I0227 10:31:40.335803 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" event={"ID":"798850d8-3ba1-4af9-a3d5-4df55bf658a4","Type":"ContainerDied","Data":"64a49c7a15707973eec81a6c76ff2742f0d6d9ef53135e3020f7766d890f2f88"} Feb 27 10:31:40 crc kubenswrapper[4998]: I0227 10:31:40.503994 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:31:40 crc kubenswrapper[4998]: I0227 10:31:40.504043 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:31:40 crc kubenswrapper[4998]: I0227 10:31:40.504078 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:31:40 crc kubenswrapper[4998]: I0227 10:31:40.504498 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9dd84c2d84273411f555b8433ac91db1f4b3ffabd27398f5ba0d8023fe393865"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:31:40 crc kubenswrapper[4998]: I0227 10:31:40.504537 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" 
containerID="cri-o://9dd84c2d84273411f555b8433ac91db1f4b3ffabd27398f5ba0d8023fe393865" gracePeriod=600 Feb 27 10:31:41 crc kubenswrapper[4998]: I0227 10:31:41.344016 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="9dd84c2d84273411f555b8433ac91db1f4b3ffabd27398f5ba0d8023fe393865" exitCode=0 Feb 27 10:31:41 crc kubenswrapper[4998]: I0227 10:31:41.344057 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"9dd84c2d84273411f555b8433ac91db1f4b3ffabd27398f5ba0d8023fe393865"} Feb 27 10:31:41 crc kubenswrapper[4998]: I0227 10:31:41.344407 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"798a591820f18523d1f6d494045865d6035d0c926980498f800d24c0dbf69b5e"} Feb 27 10:31:41 crc kubenswrapper[4998]: I0227 10:31:41.344468 4998 scope.go:117] "RemoveContainer" containerID="234da51b68ba7f355a9213d1b205beeeaf0cebf43b06c886158db76841ca5c10" Feb 27 10:31:43 crc kubenswrapper[4998]: I0227 10:31:43.366109 4998 generic.go:334] "Generic (PLEG): container finished" podID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" containerID="3de05e1ca94faab8ee857b51010258efa68932ac5adad48a8e6c13539430f892" exitCode=0 Feb 27 10:31:43 crc kubenswrapper[4998]: I0227 10:31:43.366197 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" event={"ID":"798850d8-3ba1-4af9-a3d5-4df55bf658a4","Type":"ContainerDied","Data":"3de05e1ca94faab8ee857b51010258efa68932ac5adad48a8e6c13539430f892"} Feb 27 10:31:44 crc kubenswrapper[4998]: I0227 10:31:44.374682 4998 generic.go:334] "Generic (PLEG): container finished" podID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" 
containerID="fb82f57ad5b6b495b292ceb894ff210045a5e8e806f2771b67108641f33485f2" exitCode=0 Feb 27 10:31:44 crc kubenswrapper[4998]: I0227 10:31:44.374753 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" event={"ID":"798850d8-3ba1-4af9-a3d5-4df55bf658a4","Type":"ContainerDied","Data":"fb82f57ad5b6b495b292ceb894ff210045a5e8e806f2771b67108641f33485f2"} Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.672571 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.765541 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-bundle\") pod \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.765677 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-util\") pod \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.765735 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96w64\" (UniqueName: \"kubernetes.io/projected/798850d8-3ba1-4af9-a3d5-4df55bf658a4-kube-api-access-96w64\") pod \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\" (UID: \"798850d8-3ba1-4af9-a3d5-4df55bf658a4\") " Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.766515 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-bundle" (OuterVolumeSpecName: "bundle") pod 
"798850d8-3ba1-4af9-a3d5-4df55bf658a4" (UID: "798850d8-3ba1-4af9-a3d5-4df55bf658a4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.774496 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798850d8-3ba1-4af9-a3d5-4df55bf658a4-kube-api-access-96w64" (OuterVolumeSpecName: "kube-api-access-96w64") pod "798850d8-3ba1-4af9-a3d5-4df55bf658a4" (UID: "798850d8-3ba1-4af9-a3d5-4df55bf658a4"). InnerVolumeSpecName "kube-api-access-96w64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.779393 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-util" (OuterVolumeSpecName: "util") pod "798850d8-3ba1-4af9-a3d5-4df55bf658a4" (UID: "798850d8-3ba1-4af9-a3d5-4df55bf658a4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.867356 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96w64\" (UniqueName: \"kubernetes.io/projected/798850d8-3ba1-4af9-a3d5-4df55bf658a4-kube-api-access-96w64\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.867407 4998 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:45 crc kubenswrapper[4998]: I0227 10:31:45.867418 4998 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798850d8-3ba1-4af9-a3d5-4df55bf658a4-util\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.094012 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-72qnd"] Feb 
27 10:31:46 crc kubenswrapper[4998]: E0227 10:31:46.094450 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" containerName="pull" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.094468 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" containerName="pull" Feb 27 10:31:46 crc kubenswrapper[4998]: E0227 10:31:46.094483 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" containerName="extract" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.094491 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" containerName="extract" Feb 27 10:31:46 crc kubenswrapper[4998]: E0227 10:31:46.094515 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" containerName="util" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.094525 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" containerName="util" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.094622 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="798850d8-3ba1-4af9-a3d5-4df55bf658a4" containerName="extract" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.095290 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.123823 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72qnd"] Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.271454 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-utilities\") pod \"redhat-operators-72qnd\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.271510 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87zzx\" (UniqueName: \"kubernetes.io/projected/90da9853-8e94-4102-89ea-0b133daa4dc1-kube-api-access-87zzx\") pod \"redhat-operators-72qnd\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.271673 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-catalog-content\") pod \"redhat-operators-72qnd\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.372831 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-utilities\") pod \"redhat-operators-72qnd\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.373183 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-87zzx\" (UniqueName: \"kubernetes.io/projected/90da9853-8e94-4102-89ea-0b133daa4dc1-kube-api-access-87zzx\") pod \"redhat-operators-72qnd\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.373258 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-catalog-content\") pod \"redhat-operators-72qnd\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.373375 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-utilities\") pod \"redhat-operators-72qnd\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.373607 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-catalog-content\") pod \"redhat-operators-72qnd\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.390190 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" event={"ID":"798850d8-3ba1-4af9-a3d5-4df55bf658a4","Type":"ContainerDied","Data":"2d27c0d35a347bc7615ce96053b8e3f718aeba61e90332178c6b19a5c7814207"} Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.390354 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d27c0d35a347bc7615ce96053b8e3f718aeba61e90332178c6b19a5c7814207" Feb 27 10:31:46 
crc kubenswrapper[4998]: I0227 10:31:46.390483 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.390580 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87zzx\" (UniqueName: \"kubernetes.io/projected/90da9853-8e94-4102-89ea-0b133daa4dc1-kube-api-access-87zzx\") pod \"redhat-operators-72qnd\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.412680 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:46 crc kubenswrapper[4998]: I0227 10:31:46.609406 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72qnd"] Feb 27 10:31:46 crc kubenswrapper[4998]: W0227 10:31:46.618711 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90da9853_8e94_4102_89ea_0b133daa4dc1.slice/crio-599ad2b857bebd31c552c51c49351d57f1ff366dad769ee44d835547df9ed033 WatchSource:0}: Error finding container 599ad2b857bebd31c552c51c49351d57f1ff366dad769ee44d835547df9ed033: Status 404 returned error can't find the container with id 599ad2b857bebd31c552c51c49351d57f1ff366dad769ee44d835547df9ed033 Feb 27 10:31:47 crc kubenswrapper[4998]: I0227 10:31:47.407926 4998 generic.go:334] "Generic (PLEG): container finished" podID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerID="b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705" exitCode=0 Feb 27 10:31:47 crc kubenswrapper[4998]: I0227 10:31:47.408263 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qnd" 
event={"ID":"90da9853-8e94-4102-89ea-0b133daa4dc1","Type":"ContainerDied","Data":"b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705"} Feb 27 10:31:47 crc kubenswrapper[4998]: I0227 10:31:47.408296 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qnd" event={"ID":"90da9853-8e94-4102-89ea-0b133daa4dc1","Type":"ContainerStarted","Data":"599ad2b857bebd31c552c51c49351d57f1ff366dad769ee44d835547df9ed033"} Feb 27 10:31:48 crc kubenswrapper[4998]: I0227 10:31:48.266829 4998 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.400266 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw"] Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.401327 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw" Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.404077 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.404325 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cds2w" Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.404469 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.415548 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw"] Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.426576 4998 generic.go:334] "Generic (PLEG): container finished" podID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerID="ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a" 
exitCode=0 Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.426625 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qnd" event={"ID":"90da9853-8e94-4102-89ea-0b133daa4dc1","Type":"ContainerDied","Data":"ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a"} Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.511195 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhn4\" (UniqueName: \"kubernetes.io/projected/df1a20ec-d325-4ed5-b485-8a1460ea8cf3-kube-api-access-lmhn4\") pod \"nmstate-operator-75c5dccd6c-4rvgw\" (UID: \"df1a20ec-d325-4ed5-b485-8a1460ea8cf3\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw" Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.612835 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhn4\" (UniqueName: \"kubernetes.io/projected/df1a20ec-d325-4ed5-b485-8a1460ea8cf3-kube-api-access-lmhn4\") pod \"nmstate-operator-75c5dccd6c-4rvgw\" (UID: \"df1a20ec-d325-4ed5-b485-8a1460ea8cf3\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw" Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.636133 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhn4\" (UniqueName: \"kubernetes.io/projected/df1a20ec-d325-4ed5-b485-8a1460ea8cf3-kube-api-access-lmhn4\") pod \"nmstate-operator-75c5dccd6c-4rvgw\" (UID: \"df1a20ec-d325-4ed5-b485-8a1460ea8cf3\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw" Feb 27 10:31:49 crc kubenswrapper[4998]: I0227 10:31:49.738103 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw" Feb 27 10:31:51 crc kubenswrapper[4998]: I0227 10:31:51.211147 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw"] Feb 27 10:31:52 crc kubenswrapper[4998]: I0227 10:31:52.047873 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qnd" event={"ID":"90da9853-8e94-4102-89ea-0b133daa4dc1","Type":"ContainerStarted","Data":"672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19"} Feb 27 10:31:52 crc kubenswrapper[4998]: I0227 10:31:52.049504 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw" event={"ID":"df1a20ec-d325-4ed5-b485-8a1460ea8cf3","Type":"ContainerStarted","Data":"23daf4a69ec69fe6139f0db0d9b45f3f00f316077adec1b7f989cde252bc8f8a"} Feb 27 10:31:52 crc kubenswrapper[4998]: I0227 10:31:52.073566 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-72qnd" podStartSLOduration=1.893059058 podStartE2EDuration="6.073543672s" podCreationTimestamp="2026-02-27 10:31:46 +0000 UTC" firstStartedPulling="2026-02-27 10:31:47.409539125 +0000 UTC m=+859.407810083" lastFinishedPulling="2026-02-27 10:31:51.590023729 +0000 UTC m=+863.588294697" observedRunningTime="2026-02-27 10:31:52.06991712 +0000 UTC m=+864.068188098" watchObservedRunningTime="2026-02-27 10:31:52.073543672 +0000 UTC m=+864.071814640" Feb 27 10:31:54 crc kubenswrapper[4998]: I0227 10:31:54.062996 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw" event={"ID":"df1a20ec-d325-4ed5-b485-8a1460ea8cf3","Type":"ContainerStarted","Data":"7873c4dbe2af3a9b50aaf0612a5328e10584d3a3aa284e791f4ba33e3df24921"} Feb 27 10:31:54 crc kubenswrapper[4998]: I0227 10:31:54.080398 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rvgw" podStartSLOduration=2.761168367 podStartE2EDuration="5.080383529s" podCreationTimestamp="2026-02-27 10:31:49 +0000 UTC" firstStartedPulling="2026-02-27 10:31:51.206025131 +0000 UTC m=+863.204296099" lastFinishedPulling="2026-02-27 10:31:53.525240293 +0000 UTC m=+865.523511261" observedRunningTime="2026-02-27 10:31:54.079263633 +0000 UTC m=+866.077534591" watchObservedRunningTime="2026-02-27 10:31:54.080383529 +0000 UTC m=+866.078654497" Feb 27 10:31:56 crc kubenswrapper[4998]: I0227 10:31:56.414163 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:56 crc kubenswrapper[4998]: I0227 10:31:56.414564 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:31:57 crc kubenswrapper[4998]: I0227 10:31:57.455859 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-72qnd" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerName="registry-server" probeResult="failure" output=< Feb 27 10:31:57 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s Feb 27 10:31:57 crc kubenswrapper[4998]: > Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.305830 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-kjzpr"] Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.306907 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-kjzpr" Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.312790 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mvmb5" Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.315372 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"] Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.316261 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj" Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.318906 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.333803 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sld54"] Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.334530 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.338606 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"]
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.373782 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-dbus-socket\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.373824 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/818f1bd6-a5d4-431f-87ca-bf94e3c029de-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-bksxj\" (UID: \"818f1bd6-a5d4-431f-87ca-bf94e3c029de\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.373855 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-nmstate-lock\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.373884 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z28md\" (UniqueName: \"kubernetes.io/projected/818f1bd6-a5d4-431f-87ca-bf94e3c029de-kube-api-access-z28md\") pod \"nmstate-webhook-786f45cff4-bksxj\" (UID: \"818f1bd6-a5d4-431f-87ca-bf94e3c029de\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.373920 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-ovs-socket\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.373936 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxjl\" (UniqueName: \"kubernetes.io/projected/b49731c6-730e-4a09-b5e7-21a5890cc8d7-kube-api-access-jgxjl\") pod \"nmstate-metrics-69594cc75-kjzpr\" (UID: \"b49731c6-730e-4a09-b5e7-21a5890cc8d7\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-kjzpr"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.373983 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqz42\" (UniqueName: \"kubernetes.io/projected/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-kube-api-access-jqz42\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.379055 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-kjzpr"]
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.457383 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"]
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.458657 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.460670 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-557jb"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.460989 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.461333 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.466704 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"]
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.478865 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqz42\" (UniqueName: \"kubernetes.io/projected/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-kube-api-access-jqz42\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.478924 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-dbus-socket\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.478947 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/818f1bd6-a5d4-431f-87ca-bf94e3c029de-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-bksxj\" (UID: \"818f1bd6-a5d4-431f-87ca-bf94e3c029de\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.478972 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-nmstate-lock\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.478991 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z28md\" (UniqueName: \"kubernetes.io/projected/818f1bd6-a5d4-431f-87ca-bf94e3c029de-kube-api-access-z28md\") pod \"nmstate-webhook-786f45cff4-bksxj\" (UID: \"818f1bd6-a5d4-431f-87ca-bf94e3c029de\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.479016 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-ovs-socket\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.479030 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgxjl\" (UniqueName: \"kubernetes.io/projected/b49731c6-730e-4a09-b5e7-21a5890cc8d7-kube-api-access-jgxjl\") pod \"nmstate-metrics-69594cc75-kjzpr\" (UID: \"b49731c6-730e-4a09-b5e7-21a5890cc8d7\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-kjzpr"
Feb 27 10:31:58 crc kubenswrapper[4998]: E0227 10:31:58.479567 4998 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 27 10:31:58 crc kubenswrapper[4998]: E0227 10:31:58.479681 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/818f1bd6-a5d4-431f-87ca-bf94e3c029de-tls-key-pair podName:818f1bd6-a5d4-431f-87ca-bf94e3c029de nodeName:}" failed. No retries permitted until 2026-02-27 10:31:58.979663321 +0000 UTC m=+870.977934289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/818f1bd6-a5d4-431f-87ca-bf94e3c029de-tls-key-pair") pod "nmstate-webhook-786f45cff4-bksxj" (UID: "818f1bd6-a5d4-431f-87ca-bf94e3c029de") : secret "openshift-nmstate-webhook" not found
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.479705 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-ovs-socket\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.479573 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-nmstate-lock\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.479870 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-dbus-socket\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.500168 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqz42\" (UniqueName: \"kubernetes.io/projected/23f0f1b3-af91-4dd1-8762-9e5ddb1c142e-kube-api-access-jqz42\") pod \"nmstate-handler-sld54\" (UID: \"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e\") " pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.500393 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z28md\" (UniqueName: \"kubernetes.io/projected/818f1bd6-a5d4-431f-87ca-bf94e3c029de-kube-api-access-z28md\") pod \"nmstate-webhook-786f45cff4-bksxj\" (UID: \"818f1bd6-a5d4-431f-87ca-bf94e3c029de\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.518457 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgxjl\" (UniqueName: \"kubernetes.io/projected/b49731c6-730e-4a09-b5e7-21a5890cc8d7-kube-api-access-jgxjl\") pod \"nmstate-metrics-69594cc75-kjzpr\" (UID: \"b49731c6-730e-4a09-b5e7-21a5890cc8d7\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-kjzpr"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.580420 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c68fd4-04c2-4b0d-8b6f-2b45639a240c-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-zr9tf\" (UID: \"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.580909 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f3c68fd4-04c2-4b0d-8b6f-2b45639a240c-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-zr9tf\" (UID: \"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.581131 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29vkx\" (UniqueName: \"kubernetes.io/projected/f3c68fd4-04c2-4b0d-8b6f-2b45639a240c-kube-api-access-29vkx\") pod \"nmstate-console-plugin-5dcbbd79cf-zr9tf\" (UID: \"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.625347 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69d76bdbf5-nzw4m"]
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.626039 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.626609 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-kjzpr"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.653609 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69d76bdbf5-nzw4m"]
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.668538 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.682479 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29vkx\" (UniqueName: \"kubernetes.io/projected/f3c68fd4-04c2-4b0d-8b6f-2b45639a240c-kube-api-access-29vkx\") pod \"nmstate-console-plugin-5dcbbd79cf-zr9tf\" (UID: \"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.682577 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c68fd4-04c2-4b0d-8b6f-2b45639a240c-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-zr9tf\" (UID: \"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.682605 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f3c68fd4-04c2-4b0d-8b6f-2b45639a240c-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-zr9tf\" (UID: \"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.683431 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f3c68fd4-04c2-4b0d-8b6f-2b45639a240c-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-zr9tf\" (UID: \"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.688131 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c68fd4-04c2-4b0d-8b6f-2b45639a240c-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-zr9tf\" (UID: \"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.699074 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29vkx\" (UniqueName: \"kubernetes.io/projected/f3c68fd4-04c2-4b0d-8b6f-2b45639a240c-kube-api-access-29vkx\") pod \"nmstate-console-plugin-5dcbbd79cf-zr9tf\" (UID: \"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: W0227 10:31:58.700025 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23f0f1b3_af91_4dd1_8762_9e5ddb1c142e.slice/crio-c3946b9ba38654a667c7e08fd0b0d6148c8ca7cdf9221f7aec88ee382c205f69 WatchSource:0}: Error finding container c3946b9ba38654a667c7e08fd0b0d6148c8ca7cdf9221f7aec88ee382c205f69: Status 404 returned error can't find the container with id c3946b9ba38654a667c7e08fd0b0d6148c8ca7cdf9221f7aec88ee382c205f69
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.780271 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.785717 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-trusted-ca-bundle\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.785964 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-console-config\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.785987 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj8h7\" (UniqueName: \"kubernetes.io/projected/b439662e-1917-4a30-a9f1-1b72dfb2069e-kube-api-access-sj8h7\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.786009 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b439662e-1917-4a30-a9f1-1b72dfb2069e-console-oauth-config\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.786042 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-oauth-serving-cert\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.786059 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b439662e-1917-4a30-a9f1-1b72dfb2069e-console-serving-cert\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.786082 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-service-ca\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.886893 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj8h7\" (UniqueName: \"kubernetes.io/projected/b439662e-1917-4a30-a9f1-1b72dfb2069e-kube-api-access-sj8h7\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.886971 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b439662e-1917-4a30-a9f1-1b72dfb2069e-console-oauth-config\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.887007 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-oauth-serving-cert\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.887027 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b439662e-1917-4a30-a9f1-1b72dfb2069e-console-serving-cert\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.887053 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-service-ca\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.887108 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-trusted-ca-bundle\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.887141 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-console-config\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.887922 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-console-config\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.888511 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-service-ca\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.889849 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-trusted-ca-bundle\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.890314 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b439662e-1917-4a30-a9f1-1b72dfb2069e-oauth-serving-cert\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.893565 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b439662e-1917-4a30-a9f1-1b72dfb2069e-console-serving-cert\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.893665 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b439662e-1917-4a30-a9f1-1b72dfb2069e-console-oauth-config\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.907633 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj8h7\" (UniqueName: \"kubernetes.io/projected/b439662e-1917-4a30-a9f1-1b72dfb2069e-kube-api-access-sj8h7\") pod \"console-69d76bdbf5-nzw4m\" (UID: \"b439662e-1917-4a30-a9f1-1b72dfb2069e\") " pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.939587 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69d76bdbf5-nzw4m"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.939816 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf"]
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.989108 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/818f1bd6-a5d4-431f-87ca-bf94e3c029de-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-bksxj\" (UID: \"818f1bd6-a5d4-431f-87ca-bf94e3c029de\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"
Feb 27 10:31:58 crc kubenswrapper[4998]: I0227 10:31:58.993533 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/818f1bd6-a5d4-431f-87ca-bf94e3c029de-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-bksxj\" (UID: \"818f1bd6-a5d4-431f-87ca-bf94e3c029de\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"
Feb 27 10:31:59 crc kubenswrapper[4998]: I0227 10:31:59.045472 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-kjzpr"]
Feb 27 10:31:59 crc kubenswrapper[4998]: W0227 10:31:59.049601 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb49731c6_730e_4a09_b5e7_21a5890cc8d7.slice/crio-02c506044e1a25904ad73b60314b53097162cfcf8c2b274996434019c4736bdb WatchSource:0}: Error finding container 02c506044e1a25904ad73b60314b53097162cfcf8c2b274996434019c4736bdb: Status 404 returned error can't find the container with id 02c506044e1a25904ad73b60314b53097162cfcf8c2b274996434019c4736bdb
Feb 27 10:31:59 crc kubenswrapper[4998]: I0227 10:31:59.097692 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf" event={"ID":"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c","Type":"ContainerStarted","Data":"68c7b36b00c6bb4b5869457e17c50e23d1e8900531c974a9b55d4d37d38751ad"}
Feb 27 10:31:59 crc kubenswrapper[4998]: I0227 10:31:59.098983 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sld54" event={"ID":"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e","Type":"ContainerStarted","Data":"c3946b9ba38654a667c7e08fd0b0d6148c8ca7cdf9221f7aec88ee382c205f69"}
Feb 27 10:31:59 crc kubenswrapper[4998]: I0227 10:31:59.099965 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-kjzpr" event={"ID":"b49731c6-730e-4a09-b5e7-21a5890cc8d7","Type":"ContainerStarted","Data":"02c506044e1a25904ad73b60314b53097162cfcf8c2b274996434019c4736bdb"}
Feb 27 10:31:59 crc kubenswrapper[4998]: I0227 10:31:59.237508 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"
Feb 27 10:31:59 crc kubenswrapper[4998]: I0227 10:31:59.398613 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69d76bdbf5-nzw4m"]
Feb 27 10:31:59 crc kubenswrapper[4998]: W0227 10:31:59.405391 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb439662e_1917_4a30_a9f1_1b72dfb2069e.slice/crio-0bf6b89cb156c3bc0820f4e96692822230ad2d366fb216568448ed2216897aac WatchSource:0}: Error finding container 0bf6b89cb156c3bc0820f4e96692822230ad2d366fb216568448ed2216897aac: Status 404 returned error can't find the container with id 0bf6b89cb156c3bc0820f4e96692822230ad2d366fb216568448ed2216897aac
Feb 27 10:31:59 crc kubenswrapper[4998]: I0227 10:31:59.600396 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"]
Feb 27 10:31:59 crc kubenswrapper[4998]: W0227 10:31:59.603707 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod818f1bd6_a5d4_431f_87ca_bf94e3c029de.slice/crio-65d19f3af501cf2005c01a62532e5d5c8d04ec54eccddf0f664f3309c85778fd WatchSource:0}: Error finding container 65d19f3af501cf2005c01a62532e5d5c8d04ec54eccddf0f664f3309c85778fd: Status 404 returned error can't find the container with id 65d19f3af501cf2005c01a62532e5d5c8d04ec54eccddf0f664f3309c85778fd
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.107108 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj" event={"ID":"818f1bd6-a5d4-431f-87ca-bf94e3c029de","Type":"ContainerStarted","Data":"65d19f3af501cf2005c01a62532e5d5c8d04ec54eccddf0f664f3309c85778fd"}
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.110747 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69d76bdbf5-nzw4m" event={"ID":"b439662e-1917-4a30-a9f1-1b72dfb2069e","Type":"ContainerStarted","Data":"0bf6b89cb156c3bc0820f4e96692822230ad2d366fb216568448ed2216897aac"}
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.127815 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536472-f7wqm"]
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.128894 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536472-f7wqm"
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.131884 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch"
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.132672 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.132969 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.134832 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536472-f7wqm"]
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.319575 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk7cj\" (UniqueName: \"kubernetes.io/projected/98bf12bf-459c-458c-b028-e0e0b59a3a34-kube-api-access-qk7cj\") pod \"auto-csr-approver-29536472-f7wqm\" (UID: \"98bf12bf-459c-458c-b028-e0e0b59a3a34\") " pod="openshift-infra/auto-csr-approver-29536472-f7wqm"
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.421057 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk7cj\" (UniqueName: \"kubernetes.io/projected/98bf12bf-459c-458c-b028-e0e0b59a3a34-kube-api-access-qk7cj\") pod \"auto-csr-approver-29536472-f7wqm\" (UID: \"98bf12bf-459c-458c-b028-e0e0b59a3a34\") " pod="openshift-infra/auto-csr-approver-29536472-f7wqm"
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.442846 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk7cj\" (UniqueName: \"kubernetes.io/projected/98bf12bf-459c-458c-b028-e0e0b59a3a34-kube-api-access-qk7cj\") pod \"auto-csr-approver-29536472-f7wqm\" (UID: \"98bf12bf-459c-458c-b028-e0e0b59a3a34\") " pod="openshift-infra/auto-csr-approver-29536472-f7wqm"
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.456636 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536472-f7wqm"
Feb 27 10:32:00 crc kubenswrapper[4998]: I0227 10:32:00.697525 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536472-f7wqm"]
Feb 27 10:32:01 crc kubenswrapper[4998]: I0227 10:32:01.119604 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536472-f7wqm" event={"ID":"98bf12bf-459c-458c-b028-e0e0b59a3a34","Type":"ContainerStarted","Data":"007279b9d1deaaef8b8926acdc95734ae996e41649bf0720e9f2ea742a049675"}
Feb 27 10:32:02 crc kubenswrapper[4998]: I0227 10:32:02.127977 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69d76bdbf5-nzw4m" event={"ID":"b439662e-1917-4a30-a9f1-1b72dfb2069e","Type":"ContainerStarted","Data":"e595b6734db602c3327b136ce4f76171cb5bf7e2d9fcaaa1fdc2c1637fe68504"}
Feb 27 10:32:02 crc kubenswrapper[4998]: I0227 10:32:02.153624 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69d76bdbf5-nzw4m" podStartSLOduration=4.153603858 podStartE2EDuration="4.153603858s" podCreationTimestamp="2026-02-27 10:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:32:02.151880816 +0000 UTC m=+874.150151804" watchObservedRunningTime="2026-02-27 10:32:02.153603858 +0000 UTC m=+874.151874826"
Feb 27 10:32:05 crc kubenswrapper[4998]: I0227 10:32:05.157203 4998 generic.go:334] "Generic (PLEG): container finished" podID="98bf12bf-459c-458c-b028-e0e0b59a3a34" containerID="f3ace2ccd449b5fe79ecb1654a7a36dd9ed12da3407d4638855e5fd4910f4752" exitCode=0
Feb 27 10:32:05 crc kubenswrapper[4998]: I0227 10:32:05.157393 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536472-f7wqm" event={"ID":"98bf12bf-459c-458c-b028-e0e0b59a3a34","Type":"ContainerDied","Data":"f3ace2ccd449b5fe79ecb1654a7a36dd9ed12da3407d4638855e5fd4910f4752"}
Feb 27 10:32:05 crc kubenswrapper[4998]: I0227 10:32:05.160039 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sld54" event={"ID":"23f0f1b3-af91-4dd1-8762-9e5ddb1c142e","Type":"ContainerStarted","Data":"12b5be4baa54b2bcada7324bd0e36c875b46c02e9cf97eac55fb2c913acd88be"}
Feb 27 10:32:05 crc kubenswrapper[4998]: I0227 10:32:05.160090 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sld54"
Feb 27 10:32:05 crc kubenswrapper[4998]: I0227 10:32:05.162152 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj" event={"ID":"818f1bd6-a5d4-431f-87ca-bf94e3c029de","Type":"ContainerStarted","Data":"d60acb4aaf46df6f5e274bf422a4d2bb241a366abf32242f4114ce784ef13349"}
Feb 27 10:32:05 crc kubenswrapper[4998]: I0227 10:32:05.162815 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj"
Feb 27 10:32:05 crc kubenswrapper[4998]: I0227 10:32:05.164457 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-kjzpr" event={"ID":"b49731c6-730e-4a09-b5e7-21a5890cc8d7","Type":"ContainerStarted","Data":"2b8bd91fabce6e7aeea38bddc97814b2bb96374d1a563b9251f42b1a15d8fa19"}
Feb 27 10:32:05 crc kubenswrapper[4998]: I0227 10:32:05.197205 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj" podStartSLOduration=2.596704236 podStartE2EDuration="7.197184994s" podCreationTimestamp="2026-02-27 10:31:58 +0000 UTC" firstStartedPulling="2026-02-27 10:31:59.606485231 +0000 UTC m=+871.604756199" lastFinishedPulling="2026-02-27 10:32:04.206965989 +0000 UTC m=+876.205236957" observedRunningTime="2026-02-27 10:32:05.188689191 +0000 UTC m=+877.186960159" watchObservedRunningTime="2026-02-27 10:32:05.197184994 +0000 UTC m=+877.195455962"
Feb 27 10:32:05 crc kubenswrapper[4998]: I0227 10:32:05.209281 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sld54" podStartSLOduration=1.878290069 podStartE2EDuration="7.209265526s" podCreationTimestamp="2026-02-27 10:31:58 +0000 UTC" firstStartedPulling="2026-02-27 10:31:58.702341199 +0000 UTC m=+870.700612167" lastFinishedPulling="2026-02-27 10:32:04.033316636 +0000 UTC m=+876.031587624" observedRunningTime="2026-02-27 10:32:05.205886365 +0000 UTC m=+877.204157353" watchObservedRunningTime="2026-02-27 10:32:05.209265526 +0000 UTC m=+877.207536494"
Feb 27 10:32:06 crc kubenswrapper[4998]: I0227 10:32:06.175261 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf" event={"ID":"f3c68fd4-04c2-4b0d-8b6f-2b45639a240c","Type":"ContainerStarted","Data":"5ec88375862c2b1131070e0720679828dfa37c120bd0d05b423b3c22513431e9"}
Feb 27 10:32:06 crc kubenswrapper[4998]: I0227 10:32:06.190099 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-zr9tf" podStartSLOduration=1.91248578 podStartE2EDuration="8.190079181s" podCreationTimestamp="2026-02-27 10:31:58 +0000 UTC" firstStartedPulling="2026-02-27 10:31:59.037701878 +0000 UTC m=+871.035972866" lastFinishedPulling="2026-02-27 10:32:05.315295289 +0000 UTC m=+877.313566267" observedRunningTime="2026-02-27 10:32:06.188526997 +0000 UTC m=+878.186797965" watchObservedRunningTime="2026-02-27 10:32:06.190079181 +0000 UTC m=+878.188350149"
Feb 27 10:32:06 crc kubenswrapper[4998]: I0227 10:32:06.405625 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536472-f7wqm"
Feb 27 10:32:06 crc kubenswrapper[4998]: I0227 10:32:06.467448 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-72qnd"
Feb 27 10:32:06 crc kubenswrapper[4998]: I0227 10:32:06.504793 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-72qnd"
Feb 27 10:32:06 crc kubenswrapper[4998]: I0227 10:32:06.516001 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk7cj\" (UniqueName: \"kubernetes.io/projected/98bf12bf-459c-458c-b028-e0e0b59a3a34-kube-api-access-qk7cj\") pod \"98bf12bf-459c-458c-b028-e0e0b59a3a34\" (UID: \"98bf12bf-459c-458c-b028-e0e0b59a3a34\") "
Feb 27 10:32:06 crc kubenswrapper[4998]: I0227 10:32:06.523493 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bf12bf-459c-458c-b028-e0e0b59a3a34-kube-api-access-qk7cj" (OuterVolumeSpecName: "kube-api-access-qk7cj") pod "98bf12bf-459c-458c-b028-e0e0b59a3a34" (UID: "98bf12bf-459c-458c-b028-e0e0b59a3a34"). InnerVolumeSpecName "kube-api-access-qk7cj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:32:06 crc kubenswrapper[4998]: I0227 10:32:06.621344 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk7cj\" (UniqueName: \"kubernetes.io/projected/98bf12bf-459c-458c-b028-e0e0b59a3a34-kube-api-access-qk7cj\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:06 crc kubenswrapper[4998]: I0227 10:32:06.703806 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72qnd"]
Feb 27 10:32:07 crc kubenswrapper[4998]: I0227 10:32:07.182392 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536472-f7wqm"
Feb 27 10:32:07 crc kubenswrapper[4998]: I0227 10:32:07.182413 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536472-f7wqm" event={"ID":"98bf12bf-459c-458c-b028-e0e0b59a3a34","Type":"ContainerDied","Data":"007279b9d1deaaef8b8926acdc95734ae996e41649bf0720e9f2ea742a049675"}
Feb 27 10:32:07 crc kubenswrapper[4998]: I0227 10:32:07.182744 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="007279b9d1deaaef8b8926acdc95734ae996e41649bf0720e9f2ea742a049675"
Feb 27 10:32:07 crc kubenswrapper[4998]: I0227 10:32:07.457551 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536466-rm8rs"]
Feb 27 10:32:07 crc kubenswrapper[4998]: I0227 10:32:07.462075 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536466-rm8rs"]
Feb 27 10:32:08 crc kubenswrapper[4998]: I0227 10:32:08.191316 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-kjzpr" event={"ID":"b49731c6-730e-4a09-b5e7-21a5890cc8d7","Type":"ContainerStarted","Data":"332771f5e10c0853c78ca46162f648093b7480ac69b93e51e841560e1971fa67"}
Feb 27 10:32:08 crc kubenswrapper[4998]: I0227 10:32:08.191460 4998
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-72qnd" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerName="registry-server" containerID="cri-o://672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19" gracePeriod=2 Feb 27 10:32:08 crc kubenswrapper[4998]: I0227 10:32:08.772775 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c2a58bd-244a-4888-943f-2a222e58689b" path="/var/lib/kubelet/pods/2c2a58bd-244a-4888-943f-2a222e58689b/volumes" Feb 27 10:32:08 crc kubenswrapper[4998]: I0227 10:32:08.940309 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69d76bdbf5-nzw4m" Feb 27 10:32:08 crc kubenswrapper[4998]: I0227 10:32:08.940454 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69d76bdbf5-nzw4m" Feb 27 10:32:08 crc kubenswrapper[4998]: I0227 10:32:08.945726 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69d76bdbf5-nzw4m" Feb 27 10:32:08 crc kubenswrapper[4998]: I0227 10:32:08.973496 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-kjzpr" podStartSLOduration=2.6976516889999997 podStartE2EDuration="10.973477301s" podCreationTimestamp="2026-02-27 10:31:58 +0000 UTC" firstStartedPulling="2026-02-27 10:31:59.051906888 +0000 UTC m=+871.050177856" lastFinishedPulling="2026-02-27 10:32:07.3277325 +0000 UTC m=+879.326003468" observedRunningTime="2026-02-27 10:32:08.219033287 +0000 UTC m=+880.217304255" watchObservedRunningTime="2026-02-27 10:32:08.973477301 +0000 UTC m=+880.971748289" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.149499 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.204143 4998 generic.go:334] "Generic (PLEG): container finished" podID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerID="672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19" exitCode=0 Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.204195 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qnd" event={"ID":"90da9853-8e94-4102-89ea-0b133daa4dc1","Type":"ContainerDied","Data":"672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19"} Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.204469 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qnd" event={"ID":"90da9853-8e94-4102-89ea-0b133daa4dc1","Type":"ContainerDied","Data":"599ad2b857bebd31c552c51c49351d57f1ff366dad769ee44d835547df9ed033"} Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.204238 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72qnd" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.204494 4998 scope.go:117] "RemoveContainer" containerID="672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.209107 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69d76bdbf5-nzw4m" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.226996 4998 scope.go:117] "RemoveContainer" containerID="ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.251281 4998 scope.go:117] "RemoveContainer" containerID="b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.268648 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n6r8g"] Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.276564 4998 scope.go:117] "RemoveContainer" containerID="672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19" Feb 27 10:32:09 crc kubenswrapper[4998]: E0227 10:32:09.276949 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19\": container with ID starting with 672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19 not found: ID does not exist" containerID="672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.276976 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19"} err="failed to get container status \"672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19\": rpc error: code = NotFound desc = could not find 
container \"672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19\": container with ID starting with 672f1941d27744c3f09093dd5c402651bc380ece165e27431571309581cdbd19 not found: ID does not exist" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.276994 4998 scope.go:117] "RemoveContainer" containerID="ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a" Feb 27 10:32:09 crc kubenswrapper[4998]: E0227 10:32:09.277488 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a\": container with ID starting with ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a not found: ID does not exist" containerID="ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.277520 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a"} err="failed to get container status \"ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a\": rpc error: code = NotFound desc = could not find container \"ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a\": container with ID starting with ce6066357f713073bce20023d9db65793f7e83c6aa26736992a1336dbf38dc2a not found: ID does not exist" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.277542 4998 scope.go:117] "RemoveContainer" containerID="b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705" Feb 27 10:32:09 crc kubenswrapper[4998]: E0227 10:32:09.277828 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705\": container with ID starting with b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705 not found: ID does 
not exist" containerID="b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.277852 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705"} err="failed to get container status \"b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705\": rpc error: code = NotFound desc = could not find container \"b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705\": container with ID starting with b25eadf352d12b083f3a56cb94ea5b7d993b3298e5bc94cd927d49fd35529705 not found: ID does not exist" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.350575 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-catalog-content\") pod \"90da9853-8e94-4102-89ea-0b133daa4dc1\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.350707 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87zzx\" (UniqueName: \"kubernetes.io/projected/90da9853-8e94-4102-89ea-0b133daa4dc1-kube-api-access-87zzx\") pod \"90da9853-8e94-4102-89ea-0b133daa4dc1\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.350756 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-utilities\") pod \"90da9853-8e94-4102-89ea-0b133daa4dc1\" (UID: \"90da9853-8e94-4102-89ea-0b133daa4dc1\") " Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.354724 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-utilities" (OuterVolumeSpecName: "utilities") 
pod "90da9853-8e94-4102-89ea-0b133daa4dc1" (UID: "90da9853-8e94-4102-89ea-0b133daa4dc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.357526 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90da9853-8e94-4102-89ea-0b133daa4dc1-kube-api-access-87zzx" (OuterVolumeSpecName: "kube-api-access-87zzx") pod "90da9853-8e94-4102-89ea-0b133daa4dc1" (UID: "90da9853-8e94-4102-89ea-0b133daa4dc1"). InnerVolumeSpecName "kube-api-access-87zzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.452838 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87zzx\" (UniqueName: \"kubernetes.io/projected/90da9853-8e94-4102-89ea-0b133daa4dc1-kube-api-access-87zzx\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.452872 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.480042 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90da9853-8e94-4102-89ea-0b133daa4dc1" (UID: "90da9853-8e94-4102-89ea-0b133daa4dc1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.546786 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72qnd"] Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.551358 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-72qnd"] Feb 27 10:32:09 crc kubenswrapper[4998]: I0227 10:32:09.553953 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90da9853-8e94-4102-89ea-0b133daa4dc1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:10 crc kubenswrapper[4998]: I0227 10:32:10.778758 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" path="/var/lib/kubelet/pods/90da9853-8e94-4102-89ea-0b133daa4dc1/volumes" Feb 27 10:32:13 crc kubenswrapper[4998]: I0227 10:32:13.703818 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sld54" Feb 27 10:32:19 crc kubenswrapper[4998]: I0227 10:32:19.246611 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-bksxj" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.522417 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2"] Feb 27 10:32:31 crc kubenswrapper[4998]: E0227 10:32:31.523162 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerName="extract-content" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.523178 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerName="extract-content" Feb 27 10:32:31 crc kubenswrapper[4998]: E0227 10:32:31.523190 4998 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="98bf12bf-459c-458c-b028-e0e0b59a3a34" containerName="oc" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.523197 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bf12bf-459c-458c-b028-e0e0b59a3a34" containerName="oc" Feb 27 10:32:31 crc kubenswrapper[4998]: E0227 10:32:31.523206 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerName="extract-utilities" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.523216 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerName="extract-utilities" Feb 27 10:32:31 crc kubenswrapper[4998]: E0227 10:32:31.523261 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerName="registry-server" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.523268 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerName="registry-server" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.523386 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="90da9853-8e94-4102-89ea-0b133daa4dc1" containerName="registry-server" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.523407 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bf12bf-459c-458c-b028-e0e0b59a3a34" containerName="oc" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.524621 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.527270 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.530369 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2"] Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.566319 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvj6\" (UniqueName: \"kubernetes.io/projected/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-kube-api-access-4zvj6\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.566781 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.566888 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: 
I0227 10:32:31.668122 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.668468 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvj6\" (UniqueName: \"kubernetes.io/projected/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-kube-api-access-4zvj6\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.668608 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.669048 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.669216 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.691557 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvj6\" (UniqueName: \"kubernetes.io/projected/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-kube-api-access-4zvj6\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:31 crc kubenswrapper[4998]: I0227 10:32:31.841888 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" Feb 27 10:32:32 crc kubenswrapper[4998]: I0227 10:32:32.039723 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2"] Feb 27 10:32:32 crc kubenswrapper[4998]: W0227 10:32:32.046941 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d48cabe_f00e_4d86_895b_6ed01fbb3ef4.slice/crio-e60ae1205be1bb13383556fa99bdd73fb8a865328c99d07ab9f171b8fd5688d4 WatchSource:0}: Error finding container e60ae1205be1bb13383556fa99bdd73fb8a865328c99d07ab9f171b8fd5688d4: Status 404 returned error can't find the container with id e60ae1205be1bb13383556fa99bdd73fb8a865328c99d07ab9f171b8fd5688d4 Feb 27 10:32:32 crc kubenswrapper[4998]: I0227 10:32:32.345419 4998 generic.go:334] "Generic (PLEG): container finished" podID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerID="9c303947c85ef342d03a00b7bec4b7762f9b5ae7a79294dee669e875d5de56e3" exitCode=0 
Feb 27 10:32:32 crc kubenswrapper[4998]: I0227 10:32:32.345741 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" event={"ID":"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4","Type":"ContainerDied","Data":"9c303947c85ef342d03a00b7bec4b7762f9b5ae7a79294dee669e875d5de56e3"} Feb 27 10:32:32 crc kubenswrapper[4998]: I0227 10:32:32.345773 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" event={"ID":"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4","Type":"ContainerStarted","Data":"e60ae1205be1bb13383556fa99bdd73fb8a865328c99d07ab9f171b8fd5688d4"} Feb 27 10:32:33 crc kubenswrapper[4998]: I0227 10:32:33.392972 4998 scope.go:117] "RemoveContainer" containerID="dae27eebac9de940e4f2181001ba3137d52c7d6043e94e5f852f24b8afc2e781" Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.313270 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-n6r8g" podUID="e32a75fa-f16d-4386-a933-4a6bd43f1bdc" containerName="console" containerID="cri-o://8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5" gracePeriod=15 Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.360956 4998 generic.go:334] "Generic (PLEG): container finished" podID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerID="5b413a727946d439fe05be0ca275fdcac469b85533592f291a9a10985e818d3d" exitCode=0 Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.361018 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" event={"ID":"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4","Type":"ContainerDied","Data":"5b413a727946d439fe05be0ca275fdcac469b85533592f291a9a10985e818d3d"} Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.700457 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-n6r8g_e32a75fa-f16d-4386-a933-4a6bd43f1bdc/console/0.log" Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.700517 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n6r8g" Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807023 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvdnk\" (UniqueName: \"kubernetes.io/projected/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-kube-api-access-lvdnk\") pod \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807077 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-config\") pod \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807115 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-oauth-config\") pod \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807143 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-serving-cert\") pod \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807203 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-trusted-ca-bundle\") pod \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807245 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-oauth-serving-cert\") pod \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807273 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-service-ca\") pod \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\" (UID: \"e32a75fa-f16d-4386-a933-4a6bd43f1bdc\") " Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807959 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e32a75fa-f16d-4386-a933-4a6bd43f1bdc" (UID: "e32a75fa-f16d-4386-a933-4a6bd43f1bdc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807969 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-service-ca" (OuterVolumeSpecName: "service-ca") pod "e32a75fa-f16d-4386-a933-4a6bd43f1bdc" (UID: "e32a75fa-f16d-4386-a933-4a6bd43f1bdc"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.807984 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e32a75fa-f16d-4386-a933-4a6bd43f1bdc" (UID: "e32a75fa-f16d-4386-a933-4a6bd43f1bdc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.808049 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-config" (OuterVolumeSpecName: "console-config") pod "e32a75fa-f16d-4386-a933-4a6bd43f1bdc" (UID: "e32a75fa-f16d-4386-a933-4a6bd43f1bdc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.812653 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e32a75fa-f16d-4386-a933-4a6bd43f1bdc" (UID: "e32a75fa-f16d-4386-a933-4a6bd43f1bdc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.813093 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-kube-api-access-lvdnk" (OuterVolumeSpecName: "kube-api-access-lvdnk") pod "e32a75fa-f16d-4386-a933-4a6bd43f1bdc" (UID: "e32a75fa-f16d-4386-a933-4a6bd43f1bdc"). InnerVolumeSpecName "kube-api-access-lvdnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.817483 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e32a75fa-f16d-4386-a933-4a6bd43f1bdc" (UID: "e32a75fa-f16d-4386-a933-4a6bd43f1bdc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.909167 4998 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.909203 4998 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.909214 4998 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-service-ca\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.909251 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvdnk\" (UniqueName: \"kubernetes.io/projected/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-kube-api-access-lvdnk\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.909268 4998 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.909279 4998 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:34 crc kubenswrapper[4998]: I0227 10:32:34.909288 4998 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e32a75fa-f16d-4386-a933-4a6bd43f1bdc-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.371259 4998 generic.go:334] "Generic (PLEG): container finished" podID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerID="f2b7037c6aab0a7d1fee40b8111a558ff7563de1077e51f085394a0d81e57661" exitCode=0
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.371344 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" event={"ID":"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4","Type":"ContainerDied","Data":"f2b7037c6aab0a7d1fee40b8111a558ff7563de1077e51f085394a0d81e57661"}
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.373477 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n6r8g_e32a75fa-f16d-4386-a933-4a6bd43f1bdc/console/0.log"
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.373582 4998 generic.go:334] "Generic (PLEG): container finished" podID="e32a75fa-f16d-4386-a933-4a6bd43f1bdc" containerID="8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5" exitCode=2
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.373658 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n6r8g" event={"ID":"e32a75fa-f16d-4386-a933-4a6bd43f1bdc","Type":"ContainerDied","Data":"8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5"}
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.373733 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n6r8g" event={"ID":"e32a75fa-f16d-4386-a933-4a6bd43f1bdc","Type":"ContainerDied","Data":"8920c4d580634fd5572afc291de8b62123e59b25ac2dc2bd5bc8e84fb88f528b"}
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.373796 4998 scope.go:117] "RemoveContainer" containerID="8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5"
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.373968 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n6r8g"
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.396023 4998 scope.go:117] "RemoveContainer" containerID="8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5"
Feb 27 10:32:35 crc kubenswrapper[4998]: E0227 10:32:35.397763 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5\": container with ID starting with 8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5 not found: ID does not exist" containerID="8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5"
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.397798 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5"} err="failed to get container status \"8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5\": rpc error: code = NotFound desc = could not find container \"8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5\": container with ID starting with 8eab16ecf4fbb68fbdc523ca3ecb9b0502bc32155345a98c401dca93c4eeffc5 not found: ID does not exist"
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.420286 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n6r8g"]
Feb 27 10:32:35 crc kubenswrapper[4998]: I0227 10:32:35.422837 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-n6r8g"]
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.632631 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2"
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.732967 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zvj6\" (UniqueName: \"kubernetes.io/projected/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-kube-api-access-4zvj6\") pod \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") "
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.733010 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-util\") pod \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") "
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.733072 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-bundle\") pod \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\" (UID: \"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4\") "
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.734175 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-bundle" (OuterVolumeSpecName: "bundle") pod "6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" (UID: "6d48cabe-f00e-4d86-895b-6ed01fbb3ef4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.736852 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-kube-api-access-4zvj6" (OuterVolumeSpecName: "kube-api-access-4zvj6") pod "6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" (UID: "6d48cabe-f00e-4d86-895b-6ed01fbb3ef4"). InnerVolumeSpecName "kube-api-access-4zvj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.747102 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-util" (OuterVolumeSpecName: "util") pod "6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" (UID: "6d48cabe-f00e-4d86-895b-6ed01fbb3ef4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.772070 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e32a75fa-f16d-4386-a933-4a6bd43f1bdc" path="/var/lib/kubelet/pods/e32a75fa-f16d-4386-a933-4a6bd43f1bdc/volumes"
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.834462 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zvj6\" (UniqueName: \"kubernetes.io/projected/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-kube-api-access-4zvj6\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.834507 4998 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-util\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:36 crc kubenswrapper[4998]: I0227 10:32:36.834520 4998 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d48cabe-f00e-4d86-895b-6ed01fbb3ef4-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:32:37 crc kubenswrapper[4998]: I0227 10:32:37.388447 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2" event={"ID":"6d48cabe-f00e-4d86-895b-6ed01fbb3ef4","Type":"ContainerDied","Data":"e60ae1205be1bb13383556fa99bdd73fb8a865328c99d07ab9f171b8fd5688d4"}
Feb 27 10:32:37 crc kubenswrapper[4998]: I0227 10:32:37.388736 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e60ae1205be1bb13383556fa99bdd73fb8a865328c99d07ab9f171b8fd5688d4"
Feb 27 10:32:37 crc kubenswrapper[4998]: I0227 10:32:37.388526 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.018545 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-68d695695-666w7"]
Feb 27 10:32:46 crc kubenswrapper[4998]: E0227 10:32:46.019097 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerName="pull"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.019115 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerName="pull"
Feb 27 10:32:46 crc kubenswrapper[4998]: E0227 10:32:46.019143 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerName="util"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.019151 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerName="util"
Feb 27 10:32:46 crc kubenswrapper[4998]: E0227 10:32:46.019160 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32a75fa-f16d-4386-a933-4a6bd43f1bdc" containerName="console"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.019168 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32a75fa-f16d-4386-a933-4a6bd43f1bdc" containerName="console"
Feb 27 10:32:46 crc kubenswrapper[4998]: E0227 10:32:46.019180 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerName="extract"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.019187 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerName="extract"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.019322 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="e32a75fa-f16d-4386-a933-4a6bd43f1bdc" containerName="console"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.019337 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d48cabe-f00e-4d86-895b-6ed01fbb3ef4" containerName="extract"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.019819 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.021858 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.022345 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hs7jk"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.023689 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.023906 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.024070 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.038439 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68d695695-666w7"]
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.042520 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4-webhook-cert\") pod \"metallb-operator-controller-manager-68d695695-666w7\" (UID: \"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4\") " pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.042565 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4-apiservice-cert\") pod \"metallb-operator-controller-manager-68d695695-666w7\" (UID: \"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4\") " pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.042594 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gl9m\" (UniqueName: \"kubernetes.io/projected/af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4-kube-api-access-4gl9m\") pod \"metallb-operator-controller-manager-68d695695-666w7\" (UID: \"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4\") " pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.144063 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4-webhook-cert\") pod \"metallb-operator-controller-manager-68d695695-666w7\" (UID: \"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4\") " pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.144122 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4-apiservice-cert\") pod \"metallb-operator-controller-manager-68d695695-666w7\" (UID: \"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4\") " pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.144152 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gl9m\" (UniqueName: \"kubernetes.io/projected/af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4-kube-api-access-4gl9m\") pod \"metallb-operator-controller-manager-68d695695-666w7\" (UID: \"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4\") " pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.151318 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4-apiservice-cert\") pod \"metallb-operator-controller-manager-68d695695-666w7\" (UID: \"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4\") " pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.151669 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4-webhook-cert\") pod \"metallb-operator-controller-manager-68d695695-666w7\" (UID: \"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4\") " pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.166613 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gl9m\" (UniqueName: \"kubernetes.io/projected/af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4-kube-api-access-4gl9m\") pod \"metallb-operator-controller-manager-68d695695-666w7\" (UID: \"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4\") " pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.234095 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"]
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.234892 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.236808 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.237002 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.237365 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-97xhp"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.244981 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a161099d-bd66-4a7e-9b8d-c6b8881b2512-apiservice-cert\") pod \"metallb-operator-webhook-server-7cc8894d64-jsj29\" (UID: \"a161099d-bd66-4a7e-9b8d-c6b8881b2512\") " pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.245014 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a161099d-bd66-4a7e-9b8d-c6b8881b2512-webhook-cert\") pod \"metallb-operator-webhook-server-7cc8894d64-jsj29\" (UID: \"a161099d-bd66-4a7e-9b8d-c6b8881b2512\") " pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.245044 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpznp\" (UniqueName: \"kubernetes.io/projected/a161099d-bd66-4a7e-9b8d-c6b8881b2512-kube-api-access-wpznp\") pod \"metallb-operator-webhook-server-7cc8894d64-jsj29\" (UID: \"a161099d-bd66-4a7e-9b8d-c6b8881b2512\") " pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.260025 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"]
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.339963 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.346904 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpznp\" (UniqueName: \"kubernetes.io/projected/a161099d-bd66-4a7e-9b8d-c6b8881b2512-kube-api-access-wpznp\") pod \"metallb-operator-webhook-server-7cc8894d64-jsj29\" (UID: \"a161099d-bd66-4a7e-9b8d-c6b8881b2512\") " pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.347057 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a161099d-bd66-4a7e-9b8d-c6b8881b2512-apiservice-cert\") pod \"metallb-operator-webhook-server-7cc8894d64-jsj29\" (UID: \"a161099d-bd66-4a7e-9b8d-c6b8881b2512\") " pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.347092 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a161099d-bd66-4a7e-9b8d-c6b8881b2512-webhook-cert\") pod \"metallb-operator-webhook-server-7cc8894d64-jsj29\" (UID: \"a161099d-bd66-4a7e-9b8d-c6b8881b2512\") " pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.350413 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a161099d-bd66-4a7e-9b8d-c6b8881b2512-apiservice-cert\") pod \"metallb-operator-webhook-server-7cc8894d64-jsj29\" (UID: \"a161099d-bd66-4a7e-9b8d-c6b8881b2512\") " pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.351852 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a161099d-bd66-4a7e-9b8d-c6b8881b2512-webhook-cert\") pod \"metallb-operator-webhook-server-7cc8894d64-jsj29\" (UID: \"a161099d-bd66-4a7e-9b8d-c6b8881b2512\") " pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.365873 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpznp\" (UniqueName: \"kubernetes.io/projected/a161099d-bd66-4a7e-9b8d-c6b8881b2512-kube-api-access-wpznp\") pod \"metallb-operator-webhook-server-7cc8894d64-jsj29\" (UID: \"a161099d-bd66-4a7e-9b8d-c6b8881b2512\") " pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.549562 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.735650 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68d695695-666w7"]
Feb 27 10:32:46 crc kubenswrapper[4998]: I0227 10:32:46.936219 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"]
Feb 27 10:32:46 crc kubenswrapper[4998]: W0227 10:32:46.942436 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda161099d_bd66_4a7e_9b8d_c6b8881b2512.slice/crio-83669351c66d19086970fdbd7ae842442d18adee223ce607b86628208acb03f9 WatchSource:0}: Error finding container 83669351c66d19086970fdbd7ae842442d18adee223ce607b86628208acb03f9: Status 404 returned error can't find the container with id 83669351c66d19086970fdbd7ae842442d18adee223ce607b86628208acb03f9
Feb 27 10:32:47 crc kubenswrapper[4998]: I0227 10:32:47.446620 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29" event={"ID":"a161099d-bd66-4a7e-9b8d-c6b8881b2512","Type":"ContainerStarted","Data":"83669351c66d19086970fdbd7ae842442d18adee223ce607b86628208acb03f9"}
Feb 27 10:32:47 crc kubenswrapper[4998]: I0227 10:32:47.449461 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7" event={"ID":"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4","Type":"ContainerStarted","Data":"dbad2b42a91c0189fed5f872528d275b23d5c75d7d951f595328761d947b01a7"}
Feb 27 10:32:52 crc kubenswrapper[4998]: I0227 10:32:52.484916 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29" event={"ID":"a161099d-bd66-4a7e-9b8d-c6b8881b2512","Type":"ContainerStarted","Data":"9b323e4eb9c1340101c31f897193f5d568a84985d32f09ad77323dbac36e7b4c"}
Feb 27 10:32:52 crc kubenswrapper[4998]: I0227 10:32:52.485499 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:32:52 crc kubenswrapper[4998]: I0227 10:32:52.486430 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7" event={"ID":"af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4","Type":"ContainerStarted","Data":"b292c7cd5010a74b64a2c43173058e8358471162dfdd7d7a934163ad6582a442"}
Feb 27 10:32:52 crc kubenswrapper[4998]: I0227 10:32:52.486604 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7"
Feb 27 10:32:52 crc kubenswrapper[4998]: I0227 10:32:52.502913 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29" podStartSLOduration=1.326765118 podStartE2EDuration="6.502893414s" podCreationTimestamp="2026-02-27 10:32:46 +0000 UTC" firstStartedPulling="2026-02-27 10:32:46.945769835 +0000 UTC m=+918.944040803" lastFinishedPulling="2026-02-27 10:32:52.121898121 +0000 UTC m=+924.120169099" observedRunningTime="2026-02-27 10:32:52.499550305 +0000 UTC m=+924.497821283" watchObservedRunningTime="2026-02-27 10:32:52.502893414 +0000 UTC m=+924.501164382"
Feb 27 10:32:52 crc kubenswrapper[4998]: I0227 10:32:52.525155 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7" podStartSLOduration=1.172865116 podStartE2EDuration="6.525133488s" podCreationTimestamp="2026-02-27 10:32:46 +0000 UTC" firstStartedPulling="2026-02-27 10:32:46.744692143 +0000 UTC m=+918.742963131" lastFinishedPulling="2026-02-27 10:32:52.096960495 +0000 UTC m=+924.095231503" observedRunningTime="2026-02-27 10:32:52.520594239 +0000 UTC m=+924.518865227" watchObservedRunningTime="2026-02-27 10:32:52.525133488 +0000 UTC m=+924.523404456"
Feb 27 10:32:52 crc kubenswrapper[4998]: I0227 10:32:52.946758 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pscdv"]
Feb 27 10:32:52 crc kubenswrapper[4998]: I0227 10:32:52.947968 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:52 crc kubenswrapper[4998]: I0227 10:32:52.961669 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pscdv"]
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.134023 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-catalog-content\") pod \"redhat-marketplace-pscdv\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") " pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.134346 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-utilities\") pod \"redhat-marketplace-pscdv\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") " pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.134438 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4zh\" (UniqueName: \"kubernetes.io/projected/354da589-a400-43c7-bc20-76283b2da16a-kube-api-access-gn4zh\") pod \"redhat-marketplace-pscdv\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") " pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.235266 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-catalog-content\") pod \"redhat-marketplace-pscdv\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") " pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.235345 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-utilities\") pod \"redhat-marketplace-pscdv\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") " pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.235382 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn4zh\" (UniqueName: \"kubernetes.io/projected/354da589-a400-43c7-bc20-76283b2da16a-kube-api-access-gn4zh\") pod \"redhat-marketplace-pscdv\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") " pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.235647 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-catalog-content\") pod \"redhat-marketplace-pscdv\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") " pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.236063 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-utilities\") pod \"redhat-marketplace-pscdv\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") " pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.252412 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4zh\" (UniqueName: \"kubernetes.io/projected/354da589-a400-43c7-bc20-76283b2da16a-kube-api-access-gn4zh\") pod \"redhat-marketplace-pscdv\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") " pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.316164 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:32:53 crc kubenswrapper[4998]: I0227 10:32:53.527447 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pscdv"]
Feb 27 10:32:54 crc kubenswrapper[4998]: I0227 10:32:54.502641 4998 generic.go:334] "Generic (PLEG): container finished" podID="354da589-a400-43c7-bc20-76283b2da16a" containerID="cc13a2b6c8fbb3c6bcdd930f569272638758940627dab215dff1966642b3ce69" exitCode=0
Feb 27 10:32:54 crc kubenswrapper[4998]: I0227 10:32:54.502729 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pscdv" event={"ID":"354da589-a400-43c7-bc20-76283b2da16a","Type":"ContainerDied","Data":"cc13a2b6c8fbb3c6bcdd930f569272638758940627dab215dff1966642b3ce69"}
Feb 27 10:32:54 crc kubenswrapper[4998]: I0227 10:32:54.502986 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pscdv" event={"ID":"354da589-a400-43c7-bc20-76283b2da16a","Type":"ContainerStarted","Data":"5d76da246e6399703854693c2ed5fcb0232c2d1131454e4ae24cd95688ed5b4e"}
Feb 27 10:32:55 crc kubenswrapper[4998]: I0227 10:32:55.512549 4998 generic.go:334] "Generic (PLEG): container finished" podID="354da589-a400-43c7-bc20-76283b2da16a" containerID="b52207ad6eaed465ed377eae77190d22ef0fde2fcfe67e02081797f3e275bd41" exitCode=0
Feb 27 10:32:55 crc kubenswrapper[4998]: I0227 10:32:55.512650 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pscdv" event={"ID":"354da589-a400-43c7-bc20-76283b2da16a","Type":"ContainerDied","Data":"b52207ad6eaed465ed377eae77190d22ef0fde2fcfe67e02081797f3e275bd41"}
Feb 27 10:32:56 crc kubenswrapper[4998]: I0227 10:32:56.521411 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pscdv" event={"ID":"354da589-a400-43c7-bc20-76283b2da16a","Type":"ContainerStarted","Data":"8a12151f48ca3a2183b68ccdcf7a2ffb1c0ae380d867aaadebdd44e175cfa585"}
Feb 27 10:32:56 crc kubenswrapper[4998]: I0227 10:32:56.547796 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pscdv" podStartSLOduration=2.855242912 podStartE2EDuration="4.547779086s" podCreationTimestamp="2026-02-27 10:32:52 +0000 UTC" firstStartedPulling="2026-02-27 10:32:54.503994144 +0000 UTC m=+926.502265112" lastFinishedPulling="2026-02-27 10:32:56.196530318 +0000 UTC m=+928.194801286" observedRunningTime="2026-02-27 10:32:56.54401815 +0000 UTC m=+928.542289118" watchObservedRunningTime="2026-02-27 10:32:56.547779086 +0000 UTC m=+928.546050054"
Feb 27 10:33:03 crc kubenswrapper[4998]: I0227 10:33:03.316735 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:33:03 crc kubenswrapper[4998]: I0227 10:33:03.317376 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:33:03 crc kubenswrapper[4998]: I0227 10:33:03.363593 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:33:03 crc kubenswrapper[4998]: I0227 10:33:03.592737 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:33:05 crc kubenswrapper[4998]: I0227 10:33:05.718452 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pscdv"]
Feb 27 10:33:05 crc kubenswrapper[4998]: I0227 10:33:05.719308 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pscdv" podUID="354da589-a400-43c7-bc20-76283b2da16a" containerName="registry-server" containerID="cri-o://8a12151f48ca3a2183b68ccdcf7a2ffb1c0ae380d867aaadebdd44e175cfa585" gracePeriod=2
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.553448 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7cc8894d64-jsj29"
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.591560 4998 generic.go:334] "Generic (PLEG): container finished" podID="354da589-a400-43c7-bc20-76283b2da16a" containerID="8a12151f48ca3a2183b68ccdcf7a2ffb1c0ae380d867aaadebdd44e175cfa585" exitCode=0
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.591618 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pscdv" event={"ID":"354da589-a400-43c7-bc20-76283b2da16a","Type":"ContainerDied","Data":"8a12151f48ca3a2183b68ccdcf7a2ffb1c0ae380d867aaadebdd44e175cfa585"}
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.591657 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pscdv" event={"ID":"354da589-a400-43c7-bc20-76283b2da16a","Type":"ContainerDied","Data":"5d76da246e6399703854693c2ed5fcb0232c2d1131454e4ae24cd95688ed5b4e"}
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.591668 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d76da246e6399703854693c2ed5fcb0232c2d1131454e4ae24cd95688ed5b4e"
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.593845 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pscdv"
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.730331 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-utilities\") pod \"354da589-a400-43c7-bc20-76283b2da16a\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") "
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.730429 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-catalog-content\") pod \"354da589-a400-43c7-bc20-76283b2da16a\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") "
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.730470 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn4zh\" (UniqueName: \"kubernetes.io/projected/354da589-a400-43c7-bc20-76283b2da16a-kube-api-access-gn4zh\") pod \"354da589-a400-43c7-bc20-76283b2da16a\" (UID: \"354da589-a400-43c7-bc20-76283b2da16a\") "
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.731501 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-utilities" (OuterVolumeSpecName: "utilities") pod "354da589-a400-43c7-bc20-76283b2da16a" (UID: "354da589-a400-43c7-bc20-76283b2da16a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.737967 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354da589-a400-43c7-bc20-76283b2da16a-kube-api-access-gn4zh" (OuterVolumeSpecName: "kube-api-access-gn4zh") pod "354da589-a400-43c7-bc20-76283b2da16a" (UID: "354da589-a400-43c7-bc20-76283b2da16a"). InnerVolumeSpecName "kube-api-access-gn4zh".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.755030 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "354da589-a400-43c7-bc20-76283b2da16a" (UID: "354da589-a400-43c7-bc20-76283b2da16a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.831964 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn4zh\" (UniqueName: \"kubernetes.io/projected/354da589-a400-43c7-bc20-76283b2da16a-kube-api-access-gn4zh\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.832005 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:06 crc kubenswrapper[4998]: I0227 10:33:06.832019 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354da589-a400-43c7-bc20-76283b2da16a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:07 crc kubenswrapper[4998]: I0227 10:33:07.595828 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pscdv" Feb 27 10:33:07 crc kubenswrapper[4998]: I0227 10:33:07.613299 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pscdv"] Feb 27 10:33:07 crc kubenswrapper[4998]: I0227 10:33:07.620273 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pscdv"] Feb 27 10:33:08 crc kubenswrapper[4998]: I0227 10:33:08.777354 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354da589-a400-43c7-bc20-76283b2da16a" path="/var/lib/kubelet/pods/354da589-a400-43c7-bc20-76283b2da16a/volumes" Feb 27 10:33:26 crc kubenswrapper[4998]: I0227 10:33:26.342652 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-68d695695-666w7" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.384957 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t5q9x"] Feb 27 10:33:27 crc kubenswrapper[4998]: E0227 10:33:27.385328 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354da589-a400-43c7-bc20-76283b2da16a" containerName="extract-utilities" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.385351 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="354da589-a400-43c7-bc20-76283b2da16a" containerName="extract-utilities" Feb 27 10:33:27 crc kubenswrapper[4998]: E0227 10:33:27.385403 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354da589-a400-43c7-bc20-76283b2da16a" containerName="registry-server" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.385416 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="354da589-a400-43c7-bc20-76283b2da16a" containerName="registry-server" Feb 27 10:33:27 crc kubenswrapper[4998]: E0227 10:33:27.385445 4998 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="354da589-a400-43c7-bc20-76283b2da16a" containerName="extract-content" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.385457 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="354da589-a400-43c7-bc20-76283b2da16a" containerName="extract-content" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.385690 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="354da589-a400-43c7-bc20-76283b2da16a" containerName="registry-server" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.388629 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.391884 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v"] Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.392196 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-9blgw" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.392560 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.392729 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.393165 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.395616 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.405025 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v"] Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.445209 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6156d7c5-b8a3-4741-b1b5-cc12edf7cba1-cert\") pod \"frr-k8s-webhook-server-7f989f654f-cm24v\" (UID: \"6156d7c5-b8a3-4741-b1b5-cc12edf7cba1\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.445727 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xdlc\" (UniqueName: \"kubernetes.io/projected/6156d7c5-b8a3-4741-b1b5-cc12edf7cba1-kube-api-access-5xdlc\") pod \"frr-k8s-webhook-server-7f989f654f-cm24v\" (UID: \"6156d7c5-b8a3-4741-b1b5-cc12edf7cba1\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.472081 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9rnsc"] Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.473549 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.477539 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mgmdp" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.477775 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.477883 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.478023 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.495868 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-plw4b"] Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.496830 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.499446 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.505715 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-plw4b"] Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.547906 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xdlc\" (UniqueName: \"kubernetes.io/projected/6156d7c5-b8a3-4741-b1b5-cc12edf7cba1-kube-api-access-5xdlc\") pod \"frr-k8s-webhook-server-7f989f654f-cm24v\" (UID: \"6156d7c5-b8a3-4741-b1b5-cc12edf7cba1\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.547951 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmb7z\" (UniqueName: \"kubernetes.io/projected/9f850639-e424-4e27-b201-24520cb55133-kube-api-access-jmb7z\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.547978 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9f850639-e424-4e27-b201-24520cb55133-frr-startup\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.547998 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-metrics\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 
10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.548039 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6156d7c5-b8a3-4741-b1b5-cc12edf7cba1-cert\") pod \"frr-k8s-webhook-server-7f989f654f-cm24v\" (UID: \"6156d7c5-b8a3-4741-b1b5-cc12edf7cba1\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.548068 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-frr-sockets\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.548084 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f850639-e424-4e27-b201-24520cb55133-metrics-certs\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.548123 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-reloader\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.548156 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-frr-conf\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.564054 4998 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5xdlc\" (UniqueName: \"kubernetes.io/projected/6156d7c5-b8a3-4741-b1b5-cc12edf7cba1-kube-api-access-5xdlc\") pod \"frr-k8s-webhook-server-7f989f654f-cm24v\" (UID: \"6156d7c5-b8a3-4741-b1b5-cc12edf7cba1\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.572214 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6156d7c5-b8a3-4741-b1b5-cc12edf7cba1-cert\") pod \"frr-k8s-webhook-server-7f989f654f-cm24v\" (UID: \"6156d7c5-b8a3-4741-b1b5-cc12edf7cba1\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649140 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9f850639-e424-4e27-b201-24520cb55133-frr-startup\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649200 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-metrics\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649260 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-metrics-certs\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649295 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-frr-sockets\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649314 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-memberlist\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649336 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f850639-e424-4e27-b201-24520cb55133-metrics-certs\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649379 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/665ae341-c75e-42ec-b806-6e58a49d6b0a-cert\") pod \"controller-86ddb6bd46-plw4b\" (UID: \"665ae341-c75e-42ec-b806-6e58a49d6b0a\") " pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649425 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6r5m\" (UniqueName: \"kubernetes.io/projected/6251025c-f95e-45c1-a13d-25806c9afc6d-kube-api-access-j6r5m\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649452 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-reloader\") pod \"frr-k8s-t5q9x\" (UID: 
\"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649477 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6251025c-f95e-45c1-a13d-25806c9afc6d-metallb-excludel2\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649519 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-frr-conf\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649543 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/665ae341-c75e-42ec-b806-6e58a49d6b0a-metrics-certs\") pod \"controller-86ddb6bd46-plw4b\" (UID: \"665ae341-c75e-42ec-b806-6e58a49d6b0a\") " pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649585 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vrb\" (UniqueName: \"kubernetes.io/projected/665ae341-c75e-42ec-b806-6e58a49d6b0a-kube-api-access-t9vrb\") pod \"controller-86ddb6bd46-plw4b\" (UID: \"665ae341-c75e-42ec-b806-6e58a49d6b0a\") " pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.649609 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmb7z\" (UniqueName: \"kubernetes.io/projected/9f850639-e424-4e27-b201-24520cb55133-kube-api-access-jmb7z\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " 
pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.650081 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-reloader\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: E0227 10:33:27.650187 4998 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 27 10:33:27 crc kubenswrapper[4998]: E0227 10:33:27.650260 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f850639-e424-4e27-b201-24520cb55133-metrics-certs podName:9f850639-e424-4e27-b201-24520cb55133 nodeName:}" failed. No retries permitted until 2026-02-27 10:33:28.150220138 +0000 UTC m=+960.148491186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9f850639-e424-4e27-b201-24520cb55133-metrics-certs") pod "frr-k8s-t5q9x" (UID: "9f850639-e424-4e27-b201-24520cb55133") : secret "frr-k8s-certs-secret" not found Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.650440 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-metrics\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.650517 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-frr-conf\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.650772 4998 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9f850639-e424-4e27-b201-24520cb55133-frr-sockets\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.650807 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9f850639-e424-4e27-b201-24520cb55133-frr-startup\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.672192 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmb7z\" (UniqueName: \"kubernetes.io/projected/9f850639-e424-4e27-b201-24520cb55133-kube-api-access-jmb7z\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.721007 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.751213 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9vrb\" (UniqueName: \"kubernetes.io/projected/665ae341-c75e-42ec-b806-6e58a49d6b0a-kube-api-access-t9vrb\") pod \"controller-86ddb6bd46-plw4b\" (UID: \"665ae341-c75e-42ec-b806-6e58a49d6b0a\") " pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.751364 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-metrics-certs\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.751390 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-memberlist\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.751427 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/665ae341-c75e-42ec-b806-6e58a49d6b0a-cert\") pod \"controller-86ddb6bd46-plw4b\" (UID: \"665ae341-c75e-42ec-b806-6e58a49d6b0a\") " pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.751443 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6r5m\" (UniqueName: \"kubernetes.io/projected/6251025c-f95e-45c1-a13d-25806c9afc6d-kube-api-access-j6r5m\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 
10:33:27.751464 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6251025c-f95e-45c1-a13d-25806c9afc6d-metallb-excludel2\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.751489 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/665ae341-c75e-42ec-b806-6e58a49d6b0a-metrics-certs\") pod \"controller-86ddb6bd46-plw4b\" (UID: \"665ae341-c75e-42ec-b806-6e58a49d6b0a\") " pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: E0227 10:33:27.751810 4998 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 27 10:33:27 crc kubenswrapper[4998]: E0227 10:33:27.751869 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-memberlist podName:6251025c-f95e-45c1-a13d-25806c9afc6d nodeName:}" failed. No retries permitted until 2026-02-27 10:33:28.251854269 +0000 UTC m=+960.250125237 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-memberlist") pod "speaker-9rnsc" (UID: "6251025c-f95e-45c1-a13d-25806c9afc6d") : secret "metallb-memberlist" not found Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.752365 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6251025c-f95e-45c1-a13d-25806c9afc6d-metallb-excludel2\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.755063 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/665ae341-c75e-42ec-b806-6e58a49d6b0a-metrics-certs\") pod \"controller-86ddb6bd46-plw4b\" (UID: \"665ae341-c75e-42ec-b806-6e58a49d6b0a\") " pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.755710 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/665ae341-c75e-42ec-b806-6e58a49d6b0a-cert\") pod \"controller-86ddb6bd46-plw4b\" (UID: \"665ae341-c75e-42ec-b806-6e58a49d6b0a\") " pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.757315 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-metrics-certs\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.772979 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9vrb\" (UniqueName: \"kubernetes.io/projected/665ae341-c75e-42ec-b806-6e58a49d6b0a-kube-api-access-t9vrb\") pod \"controller-86ddb6bd46-plw4b\" 
(UID: \"665ae341-c75e-42ec-b806-6e58a49d6b0a\") " pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.774954 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6r5m\" (UniqueName: \"kubernetes.io/projected/6251025c-f95e-45c1-a13d-25806c9afc6d-kube-api-access-j6r5m\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.817598 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.940460 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v"] Feb 27 10:33:27 crc kubenswrapper[4998]: I0227 10:33:27.946626 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.156294 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f850639-e424-4e27-b201-24520cb55133-metrics-certs\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.166346 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f850639-e424-4e27-b201-24520cb55133-metrics-certs\") pod \"frr-k8s-t5q9x\" (UID: \"9f850639-e424-4e27-b201-24520cb55133\") " pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.229615 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-plw4b"] Feb 27 10:33:28 crc kubenswrapper[4998]: W0227 10:33:28.233147 4998 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod665ae341_c75e_42ec_b806_6e58a49d6b0a.slice/crio-603c854fcc4a481c315d926d2a4f7a4798a5636d22b9ff1acc58b9da582d15d6 WatchSource:0}: Error finding container 603c854fcc4a481c315d926d2a4f7a4798a5636d22b9ff1acc58b9da582d15d6: Status 404 returned error can't find the container with id 603c854fcc4a481c315d926d2a4f7a4798a5636d22b9ff1acc58b9da582d15d6 Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.257741 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-memberlist\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:28 crc kubenswrapper[4998]: E0227 10:33:28.258058 4998 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 27 10:33:28 crc kubenswrapper[4998]: E0227 10:33:28.258137 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-memberlist podName:6251025c-f95e-45c1-a13d-25806c9afc6d nodeName:}" failed. No retries permitted until 2026-02-27 10:33:29.258115015 +0000 UTC m=+961.256385993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-memberlist") pod "speaker-9rnsc" (UID: "6251025c-f95e-45c1-a13d-25806c9afc6d") : secret "metallb-memberlist" not found Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.311071 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.721087 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-plw4b" event={"ID":"665ae341-c75e-42ec-b806-6e58a49d6b0a","Type":"ContainerStarted","Data":"2f977c6b996b88a1411354aa331f681d92785a051d91b83a17dd224b10621985"} Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.721457 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.721471 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-plw4b" event={"ID":"665ae341-c75e-42ec-b806-6e58a49d6b0a","Type":"ContainerStarted","Data":"407313e055ea205e6bbcb064b97f53db794242159d1ded2bebb60f1632e6906a"} Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.721481 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-plw4b" event={"ID":"665ae341-c75e-42ec-b806-6e58a49d6b0a","Type":"ContainerStarted","Data":"603c854fcc4a481c315d926d2a4f7a4798a5636d22b9ff1acc58b9da582d15d6"} Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.721930 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" event={"ID":"6156d7c5-b8a3-4741-b1b5-cc12edf7cba1","Type":"ContainerStarted","Data":"c201a93a9ef8a73cada743a260a6e89160ed8facf743580b3c25a31c80dc9bb6"} Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.722711 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerStarted","Data":"e5918e0fecbac89aaf4bf60a167fd8f440ee3209f6e94f4e5b0eff0c8ca96f4a"} Feb 27 10:33:28 crc kubenswrapper[4998]: I0227 10:33:28.737254 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/controller-86ddb6bd46-plw4b" podStartSLOduration=1.737235074 podStartE2EDuration="1.737235074s" podCreationTimestamp="2026-02-27 10:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:33:28.732853082 +0000 UTC m=+960.731124060" watchObservedRunningTime="2026-02-27 10:33:28.737235074 +0000 UTC m=+960.735506042" Feb 27 10:33:29 crc kubenswrapper[4998]: I0227 10:33:29.271199 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-memberlist\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:29 crc kubenswrapper[4998]: I0227 10:33:29.317430 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6251025c-f95e-45c1-a13d-25806c9afc6d-memberlist\") pod \"speaker-9rnsc\" (UID: \"6251025c-f95e-45c1-a13d-25806c9afc6d\") " pod="metallb-system/speaker-9rnsc" Feb 27 10:33:29 crc kubenswrapper[4998]: I0227 10:33:29.612202 4998 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mgmdp" Feb 27 10:33:29 crc kubenswrapper[4998]: I0227 10:33:29.617946 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-9rnsc" Feb 27 10:33:29 crc kubenswrapper[4998]: I0227 10:33:29.733565 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9rnsc" event={"ID":"6251025c-f95e-45c1-a13d-25806c9afc6d","Type":"ContainerStarted","Data":"7dce8843813f71bde581d0411785b6065e34ad7c617671a1aa4fbcb32094ec05"} Feb 27 10:33:30 crc kubenswrapper[4998]: I0227 10:33:30.747497 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9rnsc" event={"ID":"6251025c-f95e-45c1-a13d-25806c9afc6d","Type":"ContainerStarted","Data":"9432953e984dfc37113080481c7660753c46b814112a9bfad108e93aa634bc26"} Feb 27 10:33:30 crc kubenswrapper[4998]: I0227 10:33:30.747784 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9rnsc" event={"ID":"6251025c-f95e-45c1-a13d-25806c9afc6d","Type":"ContainerStarted","Data":"184348840d2a657f3e573bb874e657aa8b500e65225f05c3963c76e6ace71076"} Feb 27 10:33:30 crc kubenswrapper[4998]: I0227 10:33:30.747831 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9rnsc" Feb 27 10:33:30 crc kubenswrapper[4998]: I0227 10:33:30.774068 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9rnsc" podStartSLOduration=3.774047547 podStartE2EDuration="3.774047547s" podCreationTimestamp="2026-02-27 10:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:33:30.769463816 +0000 UTC m=+962.767734804" watchObservedRunningTime="2026-02-27 10:33:30.774047547 +0000 UTC m=+962.772318515" Feb 27 10:33:35 crc kubenswrapper[4998]: I0227 10:33:35.797748 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" 
event={"ID":"6156d7c5-b8a3-4741-b1b5-cc12edf7cba1","Type":"ContainerStarted","Data":"131fa80947fe44a95b4b999ca27978bf211e2b6757dde332fc215f8448504639"} Feb 27 10:33:35 crc kubenswrapper[4998]: I0227 10:33:35.798255 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:35 crc kubenswrapper[4998]: I0227 10:33:35.799842 4998 generic.go:334] "Generic (PLEG): container finished" podID="9f850639-e424-4e27-b201-24520cb55133" containerID="215e83da1666ad14d854b1e5fcb670be9cfe36edf94cbc7783d1dd45b7d32e28" exitCode=0 Feb 27 10:33:35 crc kubenswrapper[4998]: I0227 10:33:35.799894 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerDied","Data":"215e83da1666ad14d854b1e5fcb670be9cfe36edf94cbc7783d1dd45b7d32e28"} Feb 27 10:33:35 crc kubenswrapper[4998]: I0227 10:33:35.821372 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" podStartSLOduration=1.6814780200000001 podStartE2EDuration="8.821355703s" podCreationTimestamp="2026-02-27 10:33:27 +0000 UTC" firstStartedPulling="2026-02-27 10:33:27.946383369 +0000 UTC m=+959.944654337" lastFinishedPulling="2026-02-27 10:33:35.086261052 +0000 UTC m=+967.084532020" observedRunningTime="2026-02-27 10:33:35.818090766 +0000 UTC m=+967.816361724" watchObservedRunningTime="2026-02-27 10:33:35.821355703 +0000 UTC m=+967.819626671" Feb 27 10:33:36 crc kubenswrapper[4998]: I0227 10:33:36.807745 4998 generic.go:334] "Generic (PLEG): container finished" podID="9f850639-e424-4e27-b201-24520cb55133" containerID="44e882a9768d72ec399a6dfd623b6017261fd892f147c15e048745c458b0c7c6" exitCode=0 Feb 27 10:33:36 crc kubenswrapper[4998]: I0227 10:33:36.807810 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" 
event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerDied","Data":"44e882a9768d72ec399a6dfd623b6017261fd892f147c15e048745c458b0c7c6"} Feb 27 10:33:37 crc kubenswrapper[4998]: I0227 10:33:37.815839 4998 generic.go:334] "Generic (PLEG): container finished" podID="9f850639-e424-4e27-b201-24520cb55133" containerID="8633b7686117e0f491f925085951a4de6c33deb5b0228281fbd3ecab669cabbc" exitCode=0 Feb 27 10:33:37 crc kubenswrapper[4998]: I0227 10:33:37.815905 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerDied","Data":"8633b7686117e0f491f925085951a4de6c33deb5b0228281fbd3ecab669cabbc"} Feb 27 10:33:38 crc kubenswrapper[4998]: I0227 10:33:38.826472 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerStarted","Data":"b776543c46165d963b90f6559e1a287396e7b4dcbecf495614665c93af0a3534"} Feb 27 10:33:38 crc kubenswrapper[4998]: I0227 10:33:38.826739 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerStarted","Data":"888c7f436e84b31faf56cee6f6905170d3a03b0be268895560b2244bfece3e79"} Feb 27 10:33:38 crc kubenswrapper[4998]: I0227 10:33:38.826856 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:38 crc kubenswrapper[4998]: I0227 10:33:38.826893 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerStarted","Data":"b0540b4580f638adb0301a2c9852454bfc386c9522b30fd992fb57dffd065df8"} Feb 27 10:33:38 crc kubenswrapper[4998]: I0227 10:33:38.826929 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" 
event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerStarted","Data":"81871433a136281bb4f8f17d73a03074df1ab7f7de9a537f785fead71bf802eb"} Feb 27 10:33:38 crc kubenswrapper[4998]: I0227 10:33:38.826942 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerStarted","Data":"e45104bfbd904f72873dd6fccb312c2d9d5019e8f2951505d2348050d9ec01df"} Feb 27 10:33:38 crc kubenswrapper[4998]: I0227 10:33:38.826952 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5q9x" event={"ID":"9f850639-e424-4e27-b201-24520cb55133","Type":"ContainerStarted","Data":"c5b03c8bd0bfbe55f8e9214e217f76abe931eb4a2b95d01618a3ddc65a9a35cd"} Feb 27 10:33:40 crc kubenswrapper[4998]: I0227 10:33:40.504861 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:33:40 crc kubenswrapper[4998]: I0227 10:33:40.505164 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:33:43 crc kubenswrapper[4998]: I0227 10:33:43.311976 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:43 crc kubenswrapper[4998]: I0227 10:33:43.350581 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:43 crc kubenswrapper[4998]: I0227 10:33:43.377292 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-t5q9x" podStartSLOduration=9.714873919 podStartE2EDuration="16.377276133s" podCreationTimestamp="2026-02-27 10:33:27 +0000 UTC" firstStartedPulling="2026-02-27 10:33:28.405425063 +0000 UTC m=+960.403696031" lastFinishedPulling="2026-02-27 10:33:35.067827237 +0000 UTC m=+967.066098245" observedRunningTime="2026-02-27 10:33:38.848168632 +0000 UTC m=+970.846439620" watchObservedRunningTime="2026-02-27 10:33:43.377276133 +0000 UTC m=+975.375547101" Feb 27 10:33:47 crc kubenswrapper[4998]: I0227 10:33:47.727728 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cm24v" Feb 27 10:33:47 crc kubenswrapper[4998]: I0227 10:33:47.821901 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-plw4b" Feb 27 10:33:48 crc kubenswrapper[4998]: I0227 10:33:48.314642 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t5q9x" Feb 27 10:33:49 crc kubenswrapper[4998]: I0227 10:33:49.622537 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9rnsc" Feb 27 10:33:52 crc kubenswrapper[4998]: I0227 10:33:52.742388 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bktmk"] Feb 27 10:33:52 crc kubenswrapper[4998]: I0227 10:33:52.743274 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bktmk" Feb 27 10:33:52 crc kubenswrapper[4998]: I0227 10:33:52.745121 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 27 10:33:52 crc kubenswrapper[4998]: I0227 10:33:52.745238 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 27 10:33:52 crc kubenswrapper[4998]: I0227 10:33:52.746263 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-srln2" Feb 27 10:33:52 crc kubenswrapper[4998]: I0227 10:33:52.755993 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bktmk"] Feb 27 10:33:52 crc kubenswrapper[4998]: I0227 10:33:52.896577 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8m76\" (UniqueName: \"kubernetes.io/projected/0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc-kube-api-access-g8m76\") pod \"openstack-operator-index-bktmk\" (UID: \"0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc\") " pod="openstack-operators/openstack-operator-index-bktmk" Feb 27 10:33:52 crc kubenswrapper[4998]: I0227 10:33:52.998140 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8m76\" (UniqueName: \"kubernetes.io/projected/0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc-kube-api-access-g8m76\") pod \"openstack-operator-index-bktmk\" (UID: \"0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc\") " pod="openstack-operators/openstack-operator-index-bktmk" Feb 27 10:33:53 crc kubenswrapper[4998]: I0227 10:33:53.028484 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8m76\" (UniqueName: \"kubernetes.io/projected/0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc-kube-api-access-g8m76\") pod \"openstack-operator-index-bktmk\" (UID: 
\"0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc\") " pod="openstack-operators/openstack-operator-index-bktmk" Feb 27 10:33:53 crc kubenswrapper[4998]: I0227 10:33:53.061840 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bktmk" Feb 27 10:33:53 crc kubenswrapper[4998]: I0227 10:33:53.504942 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bktmk"] Feb 27 10:33:53 crc kubenswrapper[4998]: W0227 10:33:53.507722 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c2a8d0a_6b8a_4637_9eb8_3926da9fe2dc.slice/crio-c7af8a57616e6faa3356f2ff215a3a7f15650abedb62df0b58ccd71ce3f6bf88 WatchSource:0}: Error finding container c7af8a57616e6faa3356f2ff215a3a7f15650abedb62df0b58ccd71ce3f6bf88: Status 404 returned error can't find the container with id c7af8a57616e6faa3356f2ff215a3a7f15650abedb62df0b58ccd71ce3f6bf88 Feb 27 10:33:53 crc kubenswrapper[4998]: I0227 10:33:53.926620 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bktmk" event={"ID":"0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc","Type":"ContainerStarted","Data":"c7af8a57616e6faa3356f2ff215a3a7f15650abedb62df0b58ccd71ce3f6bf88"} Feb 27 10:33:56 crc kubenswrapper[4998]: I0227 10:33:56.115364 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bktmk"] Feb 27 10:33:56 crc kubenswrapper[4998]: I0227 10:33:56.728169 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cmv4s"] Feb 27 10:33:56 crc kubenswrapper[4998]: I0227 10:33:56.729285 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cmv4s" Feb 27 10:33:56 crc kubenswrapper[4998]: I0227 10:33:56.735828 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmv4s"] Feb 27 10:33:56 crc kubenswrapper[4998]: I0227 10:33:56.881709 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpvdz\" (UniqueName: \"kubernetes.io/projected/0220a262-0cbc-4328-bc03-63d749d85892-kube-api-access-zpvdz\") pod \"openstack-operator-index-cmv4s\" (UID: \"0220a262-0cbc-4328-bc03-63d749d85892\") " pod="openstack-operators/openstack-operator-index-cmv4s" Feb 27 10:33:56 crc kubenswrapper[4998]: I0227 10:33:56.946937 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bktmk" event={"ID":"0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc","Type":"ContainerStarted","Data":"ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217"} Feb 27 10:33:56 crc kubenswrapper[4998]: I0227 10:33:56.947079 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bktmk" podUID="0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc" containerName="registry-server" containerID="cri-o://ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217" gracePeriod=2 Feb 27 10:33:56 crc kubenswrapper[4998]: I0227 10:33:56.967570 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bktmk" podStartSLOduration=2.355204916 podStartE2EDuration="4.967547871s" podCreationTimestamp="2026-02-27 10:33:52 +0000 UTC" firstStartedPulling="2026-02-27 10:33:53.510076208 +0000 UTC m=+985.508347186" lastFinishedPulling="2026-02-27 10:33:56.122419173 +0000 UTC m=+988.120690141" observedRunningTime="2026-02-27 10:33:56.964053684 +0000 UTC m=+988.962324682" watchObservedRunningTime="2026-02-27 
10:33:56.967547871 +0000 UTC m=+988.965818859" Feb 27 10:33:56 crc kubenswrapper[4998]: I0227 10:33:56.983079 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpvdz\" (UniqueName: \"kubernetes.io/projected/0220a262-0cbc-4328-bc03-63d749d85892-kube-api-access-zpvdz\") pod \"openstack-operator-index-cmv4s\" (UID: \"0220a262-0cbc-4328-bc03-63d749d85892\") " pod="openstack-operators/openstack-operator-index-cmv4s" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.006837 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpvdz\" (UniqueName: \"kubernetes.io/projected/0220a262-0cbc-4328-bc03-63d749d85892-kube-api-access-zpvdz\") pod \"openstack-operator-index-cmv4s\" (UID: \"0220a262-0cbc-4328-bc03-63d749d85892\") " pod="openstack-operators/openstack-operator-index-cmv4s" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.091180 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cmv4s" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.335239 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bktmk" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.489770 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8m76\" (UniqueName: \"kubernetes.io/projected/0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc-kube-api-access-g8m76\") pod \"0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc\" (UID: \"0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc\") " Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.497402 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc-kube-api-access-g8m76" (OuterVolumeSpecName: "kube-api-access-g8m76") pod "0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc" (UID: "0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc"). InnerVolumeSpecName "kube-api-access-g8m76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.546026 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmv4s"] Feb 27 10:33:57 crc kubenswrapper[4998]: W0227 10:33:57.556424 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0220a262_0cbc_4328_bc03_63d749d85892.slice/crio-36c8fde6e14e92e429a56c176faac635b274c3f4bccba95ca70047bbfa409b3a WatchSource:0}: Error finding container 36c8fde6e14e92e429a56c176faac635b274c3f4bccba95ca70047bbfa409b3a: Status 404 returned error can't find the container with id 36c8fde6e14e92e429a56c176faac635b274c3f4bccba95ca70047bbfa409b3a Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.591044 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8m76\" (UniqueName: \"kubernetes.io/projected/0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc-kube-api-access-g8m76\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.954860 4998 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmv4s" event={"ID":"0220a262-0cbc-4328-bc03-63d749d85892","Type":"ContainerStarted","Data":"315653eb9e034e24b411688b36a8cdb0ab7a6b109f8be62306597bd38811b54c"} Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.955155 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmv4s" event={"ID":"0220a262-0cbc-4328-bc03-63d749d85892","Type":"ContainerStarted","Data":"36c8fde6e14e92e429a56c176faac635b274c3f4bccba95ca70047bbfa409b3a"} Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.956375 4998 generic.go:334] "Generic (PLEG): container finished" podID="0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc" containerID="ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217" exitCode=0 Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.956421 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bktmk" event={"ID":"0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc","Type":"ContainerDied","Data":"ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217"} Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.956431 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bktmk" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.956461 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bktmk" event={"ID":"0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc","Type":"ContainerDied","Data":"c7af8a57616e6faa3356f2ff215a3a7f15650abedb62df0b58ccd71ce3f6bf88"} Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.956485 4998 scope.go:117] "RemoveContainer" containerID="ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.976188 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cmv4s" podStartSLOduration=1.9223581809999999 podStartE2EDuration="1.976169075s" podCreationTimestamp="2026-02-27 10:33:56 +0000 UTC" firstStartedPulling="2026-02-27 10:33:57.55914491 +0000 UTC m=+989.557415878" lastFinishedPulling="2026-02-27 10:33:57.612955804 +0000 UTC m=+989.611226772" observedRunningTime="2026-02-27 10:33:57.974727621 +0000 UTC m=+989.972998589" watchObservedRunningTime="2026-02-27 10:33:57.976169075 +0000 UTC m=+989.974440043" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.979310 4998 scope.go:117] "RemoveContainer" containerID="ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217" Feb 27 10:33:57 crc kubenswrapper[4998]: E0227 10:33:57.979768 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217\": container with ID starting with ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217 not found: ID does not exist" containerID="ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.979806 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217"} err="failed to get container status \"ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217\": rpc error: code = NotFound desc = could not find container \"ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217\": container with ID starting with ff4e23c4015fd46df3a83cacab39fec06cd3f7f18537b2c54ae6e1ffdca99217 not found: ID does not exist" Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.988347 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bktmk"] Feb 27 10:33:57 crc kubenswrapper[4998]: I0227 10:33:57.991480 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bktmk"] Feb 27 10:33:58 crc kubenswrapper[4998]: I0227 10:33:58.774372 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc" path="/var/lib/kubelet/pods/0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc/volumes" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.139418 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536474-p2kkq"] Feb 27 10:34:00 crc kubenswrapper[4998]: E0227 10:34:00.140426 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc" containerName="registry-server" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.140458 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc" containerName="registry-server" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.140736 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2a8d0a-6b8a-4637-9eb8-3926da9fe2dc" containerName="registry-server" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.141695 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536474-p2kkq" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.145505 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.145829 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.145882 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536474-p2kkq"] Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.146105 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.324871 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnl89\" (UniqueName: \"kubernetes.io/projected/ac252740-66dd-42c7-be96-44f999dedded-kube-api-access-pnl89\") pod \"auto-csr-approver-29536474-p2kkq\" (UID: \"ac252740-66dd-42c7-be96-44f999dedded\") " pod="openshift-infra/auto-csr-approver-29536474-p2kkq" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.426630 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnl89\" (UniqueName: \"kubernetes.io/projected/ac252740-66dd-42c7-be96-44f999dedded-kube-api-access-pnl89\") pod \"auto-csr-approver-29536474-p2kkq\" (UID: \"ac252740-66dd-42c7-be96-44f999dedded\") " pod="openshift-infra/auto-csr-approver-29536474-p2kkq" Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.451965 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnl89\" (UniqueName: \"kubernetes.io/projected/ac252740-66dd-42c7-be96-44f999dedded-kube-api-access-pnl89\") pod \"auto-csr-approver-29536474-p2kkq\" (UID: \"ac252740-66dd-42c7-be96-44f999dedded\") " 
pod="openshift-infra/auto-csr-approver-29536474-p2kkq"
Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.468576 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536474-p2kkq"
Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.862395 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536474-p2kkq"]
Feb 27 10:34:00 crc kubenswrapper[4998]: W0227 10:34:00.869217 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac252740_66dd_42c7_be96_44f999dedded.slice/crio-b3ea7835fa4aad7b9c8198e7537a3da1a887f3ed5eabf8d1b17f0ebb4be36468 WatchSource:0}: Error finding container b3ea7835fa4aad7b9c8198e7537a3da1a887f3ed5eabf8d1b17f0ebb4be36468: Status 404 returned error can't find the container with id b3ea7835fa4aad7b9c8198e7537a3da1a887f3ed5eabf8d1b17f0ebb4be36468
Feb 27 10:34:00 crc kubenswrapper[4998]: I0227 10:34:00.977470 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536474-p2kkq" event={"ID":"ac252740-66dd-42c7-be96-44f999dedded","Type":"ContainerStarted","Data":"b3ea7835fa4aad7b9c8198e7537a3da1a887f3ed5eabf8d1b17f0ebb4be36468"}
Feb 27 10:34:02 crc kubenswrapper[4998]: I0227 10:34:02.995187 4998 generic.go:334] "Generic (PLEG): container finished" podID="ac252740-66dd-42c7-be96-44f999dedded" containerID="5d69fd183772516d968030557baa8c6087e1fe8930806bedbd19ccb753e4d54c" exitCode=0
Feb 27 10:34:02 crc kubenswrapper[4998]: I0227 10:34:02.995341 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536474-p2kkq" event={"ID":"ac252740-66dd-42c7-be96-44f999dedded","Type":"ContainerDied","Data":"5d69fd183772516d968030557baa8c6087e1fe8930806bedbd19ccb753e4d54c"}
Feb 27 10:34:04 crc kubenswrapper[4998]: I0227 10:34:04.271844 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536474-p2kkq"
Feb 27 10:34:04 crc kubenswrapper[4998]: I0227 10:34:04.375549 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnl89\" (UniqueName: \"kubernetes.io/projected/ac252740-66dd-42c7-be96-44f999dedded-kube-api-access-pnl89\") pod \"ac252740-66dd-42c7-be96-44f999dedded\" (UID: \"ac252740-66dd-42c7-be96-44f999dedded\") "
Feb 27 10:34:04 crc kubenswrapper[4998]: I0227 10:34:04.381096 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac252740-66dd-42c7-be96-44f999dedded-kube-api-access-pnl89" (OuterVolumeSpecName: "kube-api-access-pnl89") pod "ac252740-66dd-42c7-be96-44f999dedded" (UID: "ac252740-66dd-42c7-be96-44f999dedded"). InnerVolumeSpecName "kube-api-access-pnl89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:34:04 crc kubenswrapper[4998]: I0227 10:34:04.476790 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnl89\" (UniqueName: \"kubernetes.io/projected/ac252740-66dd-42c7-be96-44f999dedded-kube-api-access-pnl89\") on node \"crc\" DevicePath \"\""
Feb 27 10:34:05 crc kubenswrapper[4998]: I0227 10:34:05.011491 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536474-p2kkq" event={"ID":"ac252740-66dd-42c7-be96-44f999dedded","Type":"ContainerDied","Data":"b3ea7835fa4aad7b9c8198e7537a3da1a887f3ed5eabf8d1b17f0ebb4be36468"}
Feb 27 10:34:05 crc kubenswrapper[4998]: I0227 10:34:05.011826 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3ea7835fa4aad7b9c8198e7537a3da1a887f3ed5eabf8d1b17f0ebb4be36468"
Feb 27 10:34:05 crc kubenswrapper[4998]: I0227 10:34:05.011550 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536474-p2kkq"
Feb 27 10:34:05 crc kubenswrapper[4998]: I0227 10:34:05.334283 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536468-q85rh"]
Feb 27 10:34:05 crc kubenswrapper[4998]: I0227 10:34:05.338417 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536468-q85rh"]
Feb 27 10:34:06 crc kubenswrapper[4998]: I0227 10:34:06.773558 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257fe562-007d-4f87-b3a6-f4f0fab0fa07" path="/var/lib/kubelet/pods/257fe562-007d-4f87-b3a6-f4f0fab0fa07/volumes"
Feb 27 10:34:07 crc kubenswrapper[4998]: I0227 10:34:07.091953 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cmv4s"
Feb 27 10:34:07 crc kubenswrapper[4998]: I0227 10:34:07.092010 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cmv4s"
Feb 27 10:34:07 crc kubenswrapper[4998]: I0227 10:34:07.119103 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cmv4s"
Feb 27 10:34:08 crc kubenswrapper[4998]: I0227 10:34:08.052848 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cmv4s"
Feb 27 10:34:10 crc kubenswrapper[4998]: I0227 10:34:10.505332 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 10:34:10 crc kubenswrapper[4998]: I0227 10:34:10.505700 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.754456 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"]
Feb 27 10:34:14 crc kubenswrapper[4998]: E0227 10:34:14.754947 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac252740-66dd-42c7-be96-44f999dedded" containerName="oc"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.754958 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac252740-66dd-42c7-be96-44f999dedded" containerName="oc"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.755073 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac252740-66dd-42c7-be96-44f999dedded" containerName="oc"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.755835 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.757869 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-z8wp6"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.775167 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"]
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.831548 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-bundle\") pod \"96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") " pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.831601 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-util\") pod \"96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") " pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.831629 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqksk\" (UniqueName: \"kubernetes.io/projected/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-kube-api-access-hqksk\") pod \"96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") " pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.932541 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-bundle\") pod \"96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") " pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.932584 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-util\") pod \"96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") " pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.932608 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqksk\" (UniqueName: \"kubernetes.io/projected/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-kube-api-access-hqksk\") pod \"96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") " pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.933180 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-util\") pod \"96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") " pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.935014 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-bundle\") pod \"96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") " pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:14 crc kubenswrapper[4998]: I0227 10:34:14.950974 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqksk\" (UniqueName: \"kubernetes.io/projected/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-kube-api-access-hqksk\") pod \"96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") " pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:15 crc kubenswrapper[4998]: I0227 10:34:15.073779 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:15 crc kubenswrapper[4998]: I0227 10:34:15.524768 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"]
Feb 27 10:34:16 crc kubenswrapper[4998]: I0227 10:34:16.080780 4998 generic.go:334] "Generic (PLEG): container finished" podID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerID="bcdac3707b0a13b11762dc2674e2655f36ae3df484c4971faad3dfd1403355ab" exitCode=0
Feb 27 10:34:16 crc kubenswrapper[4998]: I0227 10:34:16.080884 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz" event={"ID":"76aa4e91-bce4-4a29-ba7d-ced8e3a54002","Type":"ContainerDied","Data":"bcdac3707b0a13b11762dc2674e2655f36ae3df484c4971faad3dfd1403355ab"}
Feb 27 10:34:16 crc kubenswrapper[4998]: I0227 10:34:16.081102 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz" event={"ID":"76aa4e91-bce4-4a29-ba7d-ced8e3a54002","Type":"ContainerStarted","Data":"43316bf0effb60def391b7290b3d72a1d8d41d5c59a23c8104897a8d3259450e"}
Feb 27 10:34:17 crc kubenswrapper[4998]: I0227 10:34:17.088398 4998 generic.go:334] "Generic (PLEG): container finished" podID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerID="bc2156d5da486efd3bd9c252df2d6e2dead25b72a767fada9088642dade58585" exitCode=0
Feb 27 10:34:17 crc kubenswrapper[4998]: I0227 10:34:17.088464 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz" event={"ID":"76aa4e91-bce4-4a29-ba7d-ced8e3a54002","Type":"ContainerDied","Data":"bc2156d5da486efd3bd9c252df2d6e2dead25b72a767fada9088642dade58585"}
Feb 27 10:34:18 crc kubenswrapper[4998]: I0227 10:34:18.100777 4998 generic.go:334] "Generic (PLEG): container finished" podID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerID="b0966d56abecf2096a0ba30fe9614bf83f2791c088ff13d210418dbee0576aae" exitCode=0
Feb 27 10:34:18 crc kubenswrapper[4998]: I0227 10:34:18.100896 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz" event={"ID":"76aa4e91-bce4-4a29-ba7d-ced8e3a54002","Type":"ContainerDied","Data":"b0966d56abecf2096a0ba30fe9614bf83f2791c088ff13d210418dbee0576aae"}
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.436203 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.589150 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqksk\" (UniqueName: \"kubernetes.io/projected/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-kube-api-access-hqksk\") pod \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") "
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.590118 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-bundle\") pod \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") "
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.590200 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-util\") pod \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\" (UID: \"76aa4e91-bce4-4a29-ba7d-ced8e3a54002\") "
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.591047 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-bundle" (OuterVolumeSpecName: "bundle") pod "76aa4e91-bce4-4a29-ba7d-ced8e3a54002" (UID: "76aa4e91-bce4-4a29-ba7d-ced8e3a54002"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.595410 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-kube-api-access-hqksk" (OuterVolumeSpecName: "kube-api-access-hqksk") pod "76aa4e91-bce4-4a29-ba7d-ced8e3a54002" (UID: "76aa4e91-bce4-4a29-ba7d-ced8e3a54002"). InnerVolumeSpecName "kube-api-access-hqksk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.624921 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-util" (OuterVolumeSpecName: "util") pod "76aa4e91-bce4-4a29-ba7d-ced8e3a54002" (UID: "76aa4e91-bce4-4a29-ba7d-ced8e3a54002"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.691291 4998 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.691335 4998 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-util\") on node \"crc\" DevicePath \"\""
Feb 27 10:34:19 crc kubenswrapper[4998]: I0227 10:34:19.691350 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqksk\" (UniqueName: \"kubernetes.io/projected/76aa4e91-bce4-4a29-ba7d-ced8e3a54002-kube-api-access-hqksk\") on node \"crc\" DevicePath \"\""
Feb 27 10:34:20 crc kubenswrapper[4998]: I0227 10:34:20.117568 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz"
Feb 27 10:34:20 crc kubenswrapper[4998]: I0227 10:34:20.117519 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz" event={"ID":"76aa4e91-bce4-4a29-ba7d-ced8e3a54002","Type":"ContainerDied","Data":"43316bf0effb60def391b7290b3d72a1d8d41d5c59a23c8104897a8d3259450e"}
Feb 27 10:34:20 crc kubenswrapper[4998]: I0227 10:34:20.117707 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43316bf0effb60def391b7290b3d72a1d8d41d5c59a23c8104897a8d3259450e"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.507970 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"]
Feb 27 10:34:27 crc kubenswrapper[4998]: E0227 10:34:27.508760 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerName="extract"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.508776 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerName="extract"
Feb 27 10:34:27 crc kubenswrapper[4998]: E0227 10:34:27.508789 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerName="pull"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.508796 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerName="pull"
Feb 27 10:34:27 crc kubenswrapper[4998]: E0227 10:34:27.508823 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerName="util"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.508829 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerName="util"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.508945 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="76aa4e91-bce4-4a29-ba7d-ced8e3a54002" containerName="extract"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.509479 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.513211 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-9q6rk"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.528871 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"]
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.708764 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9spc\" (UniqueName: \"kubernetes.io/projected/9edacacf-594d-495b-bebf-baea9d2d9ab7-kube-api-access-k9spc\") pod \"openstack-operator-controller-init-67c7cb969b-hkjl4\" (UID: \"9edacacf-594d-495b-bebf-baea9d2d9ab7\") " pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.810851 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9spc\" (UniqueName: \"kubernetes.io/projected/9edacacf-594d-495b-bebf-baea9d2d9ab7-kube-api-access-k9spc\") pod \"openstack-operator-controller-init-67c7cb969b-hkjl4\" (UID: \"9edacacf-594d-495b-bebf-baea9d2d9ab7\") " pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"
Feb 27 10:34:27 crc kubenswrapper[4998]: I0227 10:34:27.832135 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9spc\" (UniqueName: \"kubernetes.io/projected/9edacacf-594d-495b-bebf-baea9d2d9ab7-kube-api-access-k9spc\") pod \"openstack-operator-controller-init-67c7cb969b-hkjl4\" (UID: \"9edacacf-594d-495b-bebf-baea9d2d9ab7\") " pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"
Feb 27 10:34:28 crc kubenswrapper[4998]: I0227 10:34:28.128807 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"
Feb 27 10:34:28 crc kubenswrapper[4998]: I0227 10:34:28.558586 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"]
Feb 27 10:34:29 crc kubenswrapper[4998]: I0227 10:34:29.197966 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4" event={"ID":"9edacacf-594d-495b-bebf-baea9d2d9ab7","Type":"ContainerStarted","Data":"06204bb41ce9315d8fea50c01d2a35f9a34706bb938122483fd57443616b6c57"}
Feb 27 10:34:33 crc kubenswrapper[4998]: I0227 10:34:33.221112 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4" event={"ID":"9edacacf-594d-495b-bebf-baea9d2d9ab7","Type":"ContainerStarted","Data":"5bab0e93f61db4f280c124c1688a6b34261c96c19e5d4c51b76a30a1d31a0f3c"}
Feb 27 10:34:33 crc kubenswrapper[4998]: I0227 10:34:33.221682 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"
Feb 27 10:34:33 crc kubenswrapper[4998]: I0227 10:34:33.244360 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4" podStartSLOduration=2.624995196 podStartE2EDuration="6.244343926s" podCreationTimestamp="2026-02-27 10:34:27 +0000 UTC" firstStartedPulling="2026-02-27 10:34:28.575362058 +0000 UTC m=+1020.573633036" lastFinishedPulling="2026-02-27 10:34:32.194710798 +0000 UTC m=+1024.192981766" observedRunningTime="2026-02-27 10:34:33.243259262 +0000 UTC m=+1025.241530250" watchObservedRunningTime="2026-02-27 10:34:33.244343926 +0000 UTC m=+1025.242614894"
Feb 27 10:34:34 crc kubenswrapper[4998]: I0227 10:34:34.994963 4998 scope.go:117] "RemoveContainer" containerID="94626b05378281ffd1ded52b44e04f2b9c381229d9f561efab667d0d6bf74250"
Feb 27 10:34:38 crc kubenswrapper[4998]: I0227 10:34:38.131475 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-67c7cb969b-hkjl4"
Feb 27 10:34:40 crc kubenswrapper[4998]: I0227 10:34:40.504064 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 10:34:40 crc kubenswrapper[4998]: I0227 10:34:40.505408 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 10:34:40 crc kubenswrapper[4998]: I0227 10:34:40.505470 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5"
Feb 27 10:34:40 crc kubenswrapper[4998]: I0227 10:34:40.506056 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"798a591820f18523d1f6d494045865d6035d0c926980498f800d24c0dbf69b5e"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 27 10:34:40 crc kubenswrapper[4998]: I0227 10:34:40.506132 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://798a591820f18523d1f6d494045865d6035d0c926980498f800d24c0dbf69b5e" gracePeriod=600
Feb 27 10:34:41 crc kubenswrapper[4998]: I0227 10:34:41.273644 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="798a591820f18523d1f6d494045865d6035d0c926980498f800d24c0dbf69b5e" exitCode=0
Feb 27 10:34:41 crc kubenswrapper[4998]: I0227 10:34:41.273727 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"798a591820f18523d1f6d494045865d6035d0c926980498f800d24c0dbf69b5e"}
Feb 27 10:34:41 crc kubenswrapper[4998]: I0227 10:34:41.274016 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"d4bd8462bb415ab0298cf24a40c264a6708906ed9fa7eae7a8b7e15bb36a14c4"}
Feb 27 10:34:41 crc kubenswrapper[4998]: I0227 10:34:41.274043 4998 scope.go:117] "RemoveContainer" containerID="9dd84c2d84273411f555b8433ac91db1f4b3ffabd27398f5ba0d8023fe393865"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.367060 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gs7ts"]
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.368643 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.393498 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gs7ts"]
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.525388 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-catalog-content\") pod \"community-operators-gs7ts\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") " pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.525465 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-utilities\") pod \"community-operators-gs7ts\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") " pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.525518 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp9tc\" (UniqueName: \"kubernetes.io/projected/a2e182b0-621f-4d04-b0c0-b472643f18c7-kube-api-access-gp9tc\") pod \"community-operators-gs7ts\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") " pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.626842 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-utilities\") pod \"community-operators-gs7ts\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") " pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.626891 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp9tc\" (UniqueName: \"kubernetes.io/projected/a2e182b0-621f-4d04-b0c0-b472643f18c7-kube-api-access-gp9tc\") pod \"community-operators-gs7ts\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") " pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.626950 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-catalog-content\") pod \"community-operators-gs7ts\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") " pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.627419 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-catalog-content\") pod \"community-operators-gs7ts\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") " pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.627490 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-utilities\") pod \"community-operators-gs7ts\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") " pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.651557 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp9tc\" (UniqueName: \"kubernetes.io/projected/a2e182b0-621f-4d04-b0c0-b472643f18c7-kube-api-access-gp9tc\") pod \"community-operators-gs7ts\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") " pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:44 crc kubenswrapper[4998]: I0227 10:34:44.696131 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:45 crc kubenswrapper[4998]: I0227 10:34:45.178731 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gs7ts"]
Feb 27 10:34:45 crc kubenswrapper[4998]: W0227 10:34:45.195306 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2e182b0_621f_4d04_b0c0_b472643f18c7.slice/crio-54964fd4dd89ec41335ba80e2e3b4535c2229f7c561fffa28a1ba1d471745a94 WatchSource:0}: Error finding container 54964fd4dd89ec41335ba80e2e3b4535c2229f7c561fffa28a1ba1d471745a94: Status 404 returned error can't find the container with id 54964fd4dd89ec41335ba80e2e3b4535c2229f7c561fffa28a1ba1d471745a94
Feb 27 10:34:45 crc kubenswrapper[4998]: I0227 10:34:45.303708 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs7ts" event={"ID":"a2e182b0-621f-4d04-b0c0-b472643f18c7","Type":"ContainerStarted","Data":"54964fd4dd89ec41335ba80e2e3b4535c2229f7c561fffa28a1ba1d471745a94"}
Feb 27 10:34:46 crc kubenswrapper[4998]: I0227 10:34:46.311146 4998 generic.go:334] "Generic (PLEG): container finished" podID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerID="444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70" exitCode=0
Feb 27 10:34:46 crc kubenswrapper[4998]: I0227 10:34:46.311192 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs7ts" event={"ID":"a2e182b0-621f-4d04-b0c0-b472643f18c7","Type":"ContainerDied","Data":"444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70"}
Feb 27 10:34:48 crc kubenswrapper[4998]: I0227 10:34:48.327826 4998 generic.go:334] "Generic (PLEG): container finished" podID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerID="3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e" exitCode=0
Feb 27 10:34:48 crc kubenswrapper[4998]: I0227 10:34:48.327964 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs7ts" event={"ID":"a2e182b0-621f-4d04-b0c0-b472643f18c7","Type":"ContainerDied","Data":"3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e"}
Feb 27 10:34:49 crc kubenswrapper[4998]: I0227 10:34:49.338705 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs7ts" event={"ID":"a2e182b0-621f-4d04-b0c0-b472643f18c7","Type":"ContainerStarted","Data":"1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e"}
Feb 27 10:34:49 crc kubenswrapper[4998]: I0227 10:34:49.358739 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gs7ts" podStartSLOduration=2.944654506 podStartE2EDuration="5.358724032s" podCreationTimestamp="2026-02-27 10:34:44 +0000 UTC" firstStartedPulling="2026-02-27 10:34:46.31261978 +0000 UTC m=+1038.310890748" lastFinishedPulling="2026-02-27 10:34:48.726689306 +0000 UTC m=+1040.724960274" observedRunningTime="2026-02-27 10:34:49.357472513 +0000 UTC m=+1041.355743481" watchObservedRunningTime="2026-02-27 10:34:49.358724032 +0000 UTC m=+1041.356995000"
Feb 27 10:34:54 crc kubenswrapper[4998]: I0227 10:34:54.696680 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:54 crc kubenswrapper[4998]: I0227 10:34:54.698548 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:54 crc kubenswrapper[4998]: I0227 10:34:54.747525 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:55 crc kubenswrapper[4998]: I0227 10:34:55.408520 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:55 crc kubenswrapper[4998]: I0227 10:34:55.446849 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gs7ts"]
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.382969 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gs7ts" podUID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerName="registry-server" containerID="cri-o://1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e" gracePeriod=2
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.765205 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gs7ts"
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.822549 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp9tc\" (UniqueName: \"kubernetes.io/projected/a2e182b0-621f-4d04-b0c0-b472643f18c7-kube-api-access-gp9tc\") pod \"a2e182b0-621f-4d04-b0c0-b472643f18c7\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") "
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.822664 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-catalog-content\") pod \"a2e182b0-621f-4d04-b0c0-b472643f18c7\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") "
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.822731 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-utilities\") pod \"a2e182b0-621f-4d04-b0c0-b472643f18c7\" (UID: \"a2e182b0-621f-4d04-b0c0-b472643f18c7\") "
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.824953 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-utilities" (OuterVolumeSpecName: "utilities") pod "a2e182b0-621f-4d04-b0c0-b472643f18c7" (UID: "a2e182b0-621f-4d04-b0c0-b472643f18c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.830575 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e182b0-621f-4d04-b0c0-b472643f18c7-kube-api-access-gp9tc" (OuterVolumeSpecName: "kube-api-access-gp9tc") pod "a2e182b0-621f-4d04-b0c0-b472643f18c7" (UID: "a2e182b0-621f-4d04-b0c0-b472643f18c7"). InnerVolumeSpecName "kube-api-access-gp9tc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.884945 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2e182b0-621f-4d04-b0c0-b472643f18c7" (UID: "a2e182b0-621f-4d04-b0c0-b472643f18c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.923910 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.923952 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp9tc\" (UniqueName: \"kubernetes.io/projected/a2e182b0-621f-4d04-b0c0-b472643f18c7-kube-api-access-gp9tc\") on node \"crc\" DevicePath \"\""
Feb 27 10:34:57 crc kubenswrapper[4998]: I0227 10:34:57.923965 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2e182b0-621f-4d04-b0c0-b472643f18c7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.417642 4998 generic.go:334] "Generic (PLEG): container finished" podID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerID="1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e" exitCode=0
Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.417710 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs7ts" event={"ID":"a2e182b0-621f-4d04-b0c0-b472643f18c7","Type":"ContainerDied","Data":"1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e"}
Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.417975 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gs7ts" event={"ID":"a2e182b0-621f-4d04-b0c0-b472643f18c7","Type":"ContainerDied","Data":"54964fd4dd89ec41335ba80e2e3b4535c2229f7c561fffa28a1ba1d471745a94"}
Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.418004 4998 scope.go:117] "RemoveContainer" containerID="1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e"
Feb 27 10:34:58 crc kubenswrapper[4998]: I0227
10:34:58.417780 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gs7ts" Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.444725 4998 scope.go:117] "RemoveContainer" containerID="3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e" Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.444837 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gs7ts"] Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.450048 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gs7ts"] Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.471182 4998 scope.go:117] "RemoveContainer" containerID="444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70" Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.484375 4998 scope.go:117] "RemoveContainer" containerID="1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e" Feb 27 10:34:58 crc kubenswrapper[4998]: E0227 10:34:58.486267 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e\": container with ID starting with 1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e not found: ID does not exist" containerID="1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e" Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.486322 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e"} err="failed to get container status \"1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e\": rpc error: code = NotFound desc = could not find container \"1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e\": container with ID starting with 
1774fab082c5affe7abdfcea27236df1ea5c24bf26c2eb7fdb5c19316235979e not found: ID does not exist" Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.486353 4998 scope.go:117] "RemoveContainer" containerID="3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e" Feb 27 10:34:58 crc kubenswrapper[4998]: E0227 10:34:58.486742 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e\": container with ID starting with 3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e not found: ID does not exist" containerID="3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e" Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.486777 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e"} err="failed to get container status \"3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e\": rpc error: code = NotFound desc = could not find container \"3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e\": container with ID starting with 3e026a3d8d3245dd98234c296a8f502687faa7d732ae30764f2b79b3f7872b0e not found: ID does not exist" Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.486798 4998 scope.go:117] "RemoveContainer" containerID="444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70" Feb 27 10:34:58 crc kubenswrapper[4998]: E0227 10:34:58.486981 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70\": container with ID starting with 444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70 not found: ID does not exist" containerID="444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70" Feb 27 10:34:58 crc 
kubenswrapper[4998]: I0227 10:34:58.486997 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70"} err="failed to get container status \"444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70\": rpc error: code = NotFound desc = could not find container \"444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70\": container with ID starting with 444cae4b7adfa8290bb0eb3fb595dfbbf198f3fb89801dfbec6f46158d9a2e70 not found: ID does not exist" Feb 27 10:34:58 crc kubenswrapper[4998]: I0227 10:34:58.771563 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e182b0-621f-4d04-b0c0-b472643f18c7" path="/var/lib/kubelet/pods/a2e182b0-621f-4d04-b0c0-b472643f18c7/volumes" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.082912 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9z27z"] Feb 27 10:35:11 crc kubenswrapper[4998]: E0227 10:35:11.083790 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerName="extract-utilities" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.083805 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerName="extract-utilities" Feb 27 10:35:11 crc kubenswrapper[4998]: E0227 10:35:11.083822 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerName="extract-content" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.083830 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerName="extract-content" Feb 27 10:35:11 crc kubenswrapper[4998]: E0227 10:35:11.083842 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerName="registry-server" Feb 
27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.083849 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerName="registry-server" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.084031 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e182b0-621f-4d04-b0c0-b472643f18c7" containerName="registry-server" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.085078 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.095197 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-utilities\") pod \"certified-operators-9z27z\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.095289 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms8b2\" (UniqueName: \"kubernetes.io/projected/79f06471-ac2f-4fec-95da-760990d48ad9-kube-api-access-ms8b2\") pod \"certified-operators-9z27z\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.095357 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-catalog-content\") pod \"certified-operators-9z27z\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.100640 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-9z27z"] Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.196846 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-catalog-content\") pod \"certified-operators-9z27z\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.196932 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-utilities\") pod \"certified-operators-9z27z\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.196971 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms8b2\" (UniqueName: \"kubernetes.io/projected/79f06471-ac2f-4fec-95da-760990d48ad9-kube-api-access-ms8b2\") pod \"certified-operators-9z27z\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.197827 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-catalog-content\") pod \"certified-operators-9z27z\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.198101 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-utilities\") pod \"certified-operators-9z27z\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " 
pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.225345 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms8b2\" (UniqueName: \"kubernetes.io/projected/79f06471-ac2f-4fec-95da-760990d48ad9-kube-api-access-ms8b2\") pod \"certified-operators-9z27z\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.401437 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:11 crc kubenswrapper[4998]: I0227 10:35:11.725748 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9z27z"] Feb 27 10:35:12 crc kubenswrapper[4998]: I0227 10:35:12.509336 4998 generic.go:334] "Generic (PLEG): container finished" podID="79f06471-ac2f-4fec-95da-760990d48ad9" containerID="49457f1fe0d3f268f621e3ac19d1d0296977592093d46fc3244f8f4e3869437e" exitCode=0 Feb 27 10:35:12 crc kubenswrapper[4998]: I0227 10:35:12.509380 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9z27z" event={"ID":"79f06471-ac2f-4fec-95da-760990d48ad9","Type":"ContainerDied","Data":"49457f1fe0d3f268f621e3ac19d1d0296977592093d46fc3244f8f4e3869437e"} Feb 27 10:35:12 crc kubenswrapper[4998]: I0227 10:35:12.509408 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9z27z" event={"ID":"79f06471-ac2f-4fec-95da-760990d48ad9","Type":"ContainerStarted","Data":"e899b6a71ad53ee2fee2a17cef30d12591858fab76e019e50de89ae6ccde33a6"} Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.123764 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-km79k"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.125103 4998 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.128647 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ljr4l" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.131322 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.132505 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.134475 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rn5ft" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.137527 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-km79k"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.144395 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.191554 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.192399 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.197986 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zn6qb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.198707 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.207978 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-249j5"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.209184 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.213661 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-l4kzs" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.229093 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-249j5"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.243216 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.245307 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.251504 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ld64l" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.258075 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.259214 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.262465 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4vncv" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.271421 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.272217 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.275844 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-bbrst" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.276013 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.311576 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.325699 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.326671 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.330643 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jphpf" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.330883 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4tvk\" (UniqueName: \"kubernetes.io/projected/cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf-kube-api-access-d4tvk\") pod \"barbican-operator-controller-manager-6db6876945-km79k\" (UID: \"cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.330934 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4cq\" (UniqueName: 
\"kubernetes.io/projected/36f927dc-fac8-4bb6-85d1-df539857edf1-kube-api-access-pp4cq\") pod \"designate-operator-controller-manager-5d87c9d997-nnlcl\" (UID: \"36f927dc-fac8-4bb6-85d1-df539857edf1\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.330967 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c554g\" (UniqueName: \"kubernetes.io/projected/36f2300a-f1d8-429d-8b4f-065c53c8b68b-kube-api-access-c554g\") pod \"glance-operator-controller-manager-64db6967f8-249j5\" (UID: \"36f2300a-f1d8-429d-8b4f-065c53c8b68b\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.330987 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snbj6\" (UniqueName: \"kubernetes.io/projected/f7519997-ef58-4091-bce9-a43762551d56-kube-api-access-snbj6\") pod \"cinder-operator-controller-manager-55d77d7b5c-gg44v\" (UID: \"f7519997-ef58-4091-bce9-a43762551d56\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.333940 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.361494 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.362705 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.365624 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-z77vm" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.371706 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.375750 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.379984 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-btbm5" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.385243 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.409761 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.410802 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.412966 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tqlvq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.416507 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.431604 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.432589 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.433284 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp4cq\" (UniqueName: \"kubernetes.io/projected/36f927dc-fac8-4bb6-85d1-df539857edf1-kube-api-access-pp4cq\") pod \"designate-operator-controller-manager-5d87c9d997-nnlcl\" (UID: \"36f927dc-fac8-4bb6-85d1-df539857edf1\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.433416 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.433554 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c554g\" 
(UniqueName: \"kubernetes.io/projected/36f2300a-f1d8-429d-8b4f-065c53c8b68b-kube-api-access-c554g\") pod \"glance-operator-controller-manager-64db6967f8-249j5\" (UID: \"36f2300a-f1d8-429d-8b4f-065c53c8b68b\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.433647 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drbj4\" (UniqueName: \"kubernetes.io/projected/a77eccb8-8369-473a-93ad-d9d67ccea057-kube-api-access-drbj4\") pod \"manila-operator-controller-manager-67d996989d-qqrgh\" (UID: \"a77eccb8-8369-473a-93ad-d9d67ccea057\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.433735 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfvht\" (UniqueName: \"kubernetes.io/projected/35236f87-0600-46d7-ba5b-7576e20b9dc4-kube-api-access-qfvht\") pod \"horizon-operator-controller-manager-78bc7f9bd9-wxtjl\" (UID: \"35236f87-0600-46d7-ba5b-7576e20b9dc4\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.433817 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snbj6\" (UniqueName: \"kubernetes.io/projected/f7519997-ef58-4091-bce9-a43762551d56-kube-api-access-snbj6\") pod \"cinder-operator-controller-manager-55d77d7b5c-gg44v\" (UID: \"f7519997-ef58-4091-bce9-a43762551d56\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.433898 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7d4j\" (UniqueName: \"kubernetes.io/projected/a27dd930-0b57-431c-ae5c-7c9af1e11dfa-kube-api-access-p7d4j\") pod 
\"heat-operator-controller-manager-cf99c678f-jcgpp\" (UID: \"a27dd930-0b57-431c-ae5c-7c9af1e11dfa\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.433996 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdks\" (UniqueName: \"kubernetes.io/projected/ad7e7a26-5a61-408d-86ae-0b25b8617147-kube-api-access-5fdks\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.434128 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfbnj\" (UniqueName: \"kubernetes.io/projected/9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc-kube-api-access-dfbnj\") pod \"mariadb-operator-controller-manager-7b6bfb6475-zr6mq\" (UID: \"9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.434216 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kssv9\" (UniqueName: \"kubernetes.io/projected/4a837f08-5b8a-4cd8-8943-dc252cfb3f0f-kube-api-access-kssv9\") pod \"neutron-operator-controller-manager-54688575f-fxkzb\" (UID: \"4a837f08-5b8a-4cd8-8943-dc252cfb3f0f\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.434343 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4tvk\" (UniqueName: \"kubernetes.io/projected/cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf-kube-api-access-d4tvk\") pod \"barbican-operator-controller-manager-6db6876945-km79k\" (UID: \"cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf\") " 
pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.434429 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkl2k\" (UniqueName: \"kubernetes.io/projected/21596c8d-5360-482b-8cea-eda167f2f1cd-kube-api-access-kkl2k\") pod \"keystone-operator-controller-manager-7c789f89c6-c9hqt\" (UID: \"21596c8d-5360-482b-8cea-eda167f2f1cd\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.434520 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dd6\" (UniqueName: \"kubernetes.io/projected/83211ec0-66ef-476c-ad20-e17e88348f29-kube-api-access-z4dd6\") pod \"ironic-operator-controller-manager-545456dc4-wmdmq\" (UID: \"83211ec0-66ef-476c-ad20-e17e88348f29\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.438269 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-wqwsg" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.447025 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.468546 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.479890 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp4cq\" (UniqueName: \"kubernetes.io/projected/36f927dc-fac8-4bb6-85d1-df539857edf1-kube-api-access-pp4cq\") pod \"designate-operator-controller-manager-5d87c9d997-nnlcl\" (UID: 
\"36f927dc-fac8-4bb6-85d1-df539857edf1\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.492452 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snbj6\" (UniqueName: \"kubernetes.io/projected/f7519997-ef58-4091-bce9-a43762551d56-kube-api-access-snbj6\") pod \"cinder-operator-controller-manager-55d77d7b5c-gg44v\" (UID: \"f7519997-ef58-4091-bce9-a43762551d56\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.492578 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c554g\" (UniqueName: \"kubernetes.io/projected/36f2300a-f1d8-429d-8b4f-065c53c8b68b-kube-api-access-c554g\") pod \"glance-operator-controller-manager-64db6967f8-249j5\" (UID: \"36f2300a-f1d8-429d-8b4f-065c53c8b68b\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.510098 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.530218 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4tvk\" (UniqueName: \"kubernetes.io/projected/cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf-kube-api-access-d4tvk\") pod \"barbican-operator-controller-manager-6db6876945-km79k\" (UID: \"cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.533113 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.535911 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drbj4\" (UniqueName: \"kubernetes.io/projected/a77eccb8-8369-473a-93ad-d9d67ccea057-kube-api-access-drbj4\") pod \"manila-operator-controller-manager-67d996989d-qqrgh\" (UID: \"a77eccb8-8369-473a-93ad-d9d67ccea057\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.535961 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfvht\" (UniqueName: \"kubernetes.io/projected/35236f87-0600-46d7-ba5b-7576e20b9dc4-kube-api-access-qfvht\") pod \"horizon-operator-controller-manager-78bc7f9bd9-wxtjl\" (UID: \"35236f87-0600-46d7-ba5b-7576e20b9dc4\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.535979 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7d4j\" (UniqueName: \"kubernetes.io/projected/a27dd930-0b57-431c-ae5c-7c9af1e11dfa-kube-api-access-p7d4j\") pod \"heat-operator-controller-manager-cf99c678f-jcgpp\" (UID: \"a27dd930-0b57-431c-ae5c-7c9af1e11dfa\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.536001 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdks\" (UniqueName: \"kubernetes.io/projected/ad7e7a26-5a61-408d-86ae-0b25b8617147-kube-api-access-5fdks\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.536045 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfbnj\" (UniqueName: \"kubernetes.io/projected/9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc-kube-api-access-dfbnj\") pod \"mariadb-operator-controller-manager-7b6bfb6475-zr6mq\" (UID: \"9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.536065 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kssv9\" (UniqueName: \"kubernetes.io/projected/4a837f08-5b8a-4cd8-8943-dc252cfb3f0f-kube-api-access-kssv9\") pod \"neutron-operator-controller-manager-54688575f-fxkzb\" (UID: \"4a837f08-5b8a-4cd8-8943-dc252cfb3f0f\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.536095 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkl2k\" (UniqueName: \"kubernetes.io/projected/21596c8d-5360-482b-8cea-eda167f2f1cd-kube-api-access-kkl2k\") pod \"keystone-operator-controller-manager-7c789f89c6-c9hqt\" (UID: \"21596c8d-5360-482b-8cea-eda167f2f1cd\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.536113 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dd6\" (UniqueName: \"kubernetes.io/projected/83211ec0-66ef-476c-ad20-e17e88348f29-kube-api-access-z4dd6\") pod \"ironic-operator-controller-manager-545456dc4-wmdmq\" (UID: \"83211ec0-66ef-476c-ad20-e17e88348f29\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.536150 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" Feb 27 10:35:13 crc kubenswrapper[4998]: E0227 10:35:13.536306 4998 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 10:35:13 crc kubenswrapper[4998]: E0227 10:35:13.536370 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert podName:ad7e7a26-5a61-408d-86ae-0b25b8617147 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:14.036340028 +0000 UTC m=+1066.034610986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert") pod "infra-operator-controller-manager-f7fcc58b9-kjznw" (UID: "ad7e7a26-5a61-408d-86ae-0b25b8617147") : secret "infra-operator-webhook-server-cert" not found Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.538019 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.539611 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.542570 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-wtnvc" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.549267 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.550111 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.554000 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7fj6w" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.555147 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.558105 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkl2k\" (UniqueName: \"kubernetes.io/projected/21596c8d-5360-482b-8cea-eda167f2f1cd-kube-api-access-kkl2k\") pod \"keystone-operator-controller-manager-7c789f89c6-c9hqt\" (UID: \"21596c8d-5360-482b-8cea-eda167f2f1cd\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.560926 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drbj4\" (UniqueName: \"kubernetes.io/projected/a77eccb8-8369-473a-93ad-d9d67ccea057-kube-api-access-drbj4\") pod \"manila-operator-controller-manager-67d996989d-qqrgh\" (UID: \"a77eccb8-8369-473a-93ad-d9d67ccea057\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.570336 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dd6\" (UniqueName: \"kubernetes.io/projected/83211ec0-66ef-476c-ad20-e17e88348f29-kube-api-access-z4dd6\") pod \"ironic-operator-controller-manager-545456dc4-wmdmq\" (UID: \"83211ec0-66ef-476c-ad20-e17e88348f29\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.572263 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.572606 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kssv9\" (UniqueName: \"kubernetes.io/projected/4a837f08-5b8a-4cd8-8943-dc252cfb3f0f-kube-api-access-kssv9\") pod \"neutron-operator-controller-manager-54688575f-fxkzb\" (UID: \"4a837f08-5b8a-4cd8-8943-dc252cfb3f0f\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.572979 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7d4j\" (UniqueName: \"kubernetes.io/projected/a27dd930-0b57-431c-ae5c-7c9af1e11dfa-kube-api-access-p7d4j\") pod \"heat-operator-controller-manager-cf99c678f-jcgpp\" (UID: \"a27dd930-0b57-431c-ae5c-7c9af1e11dfa\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.574868 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.575541 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfbnj\" (UniqueName: \"kubernetes.io/projected/9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc-kube-api-access-dfbnj\") pod \"mariadb-operator-controller-manager-7b6bfb6475-zr6mq\" (UID: \"9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.575737 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdks\" (UniqueName: \"kubernetes.io/projected/ad7e7a26-5a61-408d-86ae-0b25b8617147-kube-api-access-5fdks\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: 
\"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.580624 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.581291 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfvht\" (UniqueName: \"kubernetes.io/projected/35236f87-0600-46d7-ba5b-7576e20b9dc4-kube-api-access-qfvht\") pod \"horizon-operator-controller-manager-78bc7f9bd9-wxtjl\" (UID: \"35236f87-0600-46d7-ba5b-7576e20b9dc4\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.582397 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.584950 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9z27z" event={"ID":"79f06471-ac2f-4fec-95da-760990d48ad9","Type":"ContainerStarted","Data":"0b5304665eda60e1c3766a688a5b5f1bae8baa4be44271087140a1ee16f263e2"} Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.586789 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.607020 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.607241 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.608077 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.610775 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6h4pr" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.615912 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.616816 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.622313 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.622426 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lfkmr" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.640280 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.647948 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.648898 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.653705 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kkglv" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.655667 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.663593 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.681638 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.687204 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.698150 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.699286 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.701902 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dp77d" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.706444 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.738189 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.739389 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bp7f\" (UniqueName: \"kubernetes.io/projected/61e9c6f0-0c9c-47ca-b013-b94ba962ec66-kube-api-access-2bp7f\") pod \"nova-operator-controller-manager-74b6b5dc96-hv45m\" (UID: \"61e9c6f0-0c9c-47ca-b013-b94ba962ec66\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.739905 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbkxh\" (UniqueName: \"kubernetes.io/projected/aaf3093e-214b-48a9-8310-56ba32b094f7-kube-api-access-gbkxh\") pod \"ovn-operator-controller-manager-75684d597f-dh4xx\" (UID: \"aaf3093e-214b-48a9-8310-56ba32b094f7\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.739951 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9nhp\" (UniqueName: \"kubernetes.io/projected/d31df8ec-7c1c-42b3-b538-2949c015b6e6-kube-api-access-z9nhp\") pod \"octavia-operator-controller-manager-5d86c7ddb7-fs8hb\" (UID: \"d31df8ec-7c1c-42b3-b538-2949c015b6e6\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.739982 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fkh8\" (UniqueName: \"kubernetes.io/projected/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-kube-api-access-4fkh8\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.740015 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.748586 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.761306 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.762250 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.766788 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-h5xxn" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.773447 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.781342 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.787898 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.799556 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.806578 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.811245 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nlwqq" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.829289 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.842647 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.842690 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbrn7\" (UniqueName: \"kubernetes.io/projected/3855bff7-c203-4258-98a0-5afa77cf9b5c-kube-api-access-gbrn7\") pod \"placement-operator-controller-manager-648564c9fc-l6kh7\" (UID: \"3855bff7-c203-4258-98a0-5afa77cf9b5c\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.842785 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2bp7f\" (UniqueName: \"kubernetes.io/projected/61e9c6f0-0c9c-47ca-b013-b94ba962ec66-kube-api-access-2bp7f\") pod \"nova-operator-controller-manager-74b6b5dc96-hv45m\" (UID: \"61e9c6f0-0c9c-47ca-b013-b94ba962ec66\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.842816 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77brb\" (UniqueName: \"kubernetes.io/projected/b712d7f9-e5c9-4a97-9757-e8689527c542-kube-api-access-77brb\") pod \"swift-operator-controller-manager-9b9ff9f4d-mhc4p\" (UID: \"b712d7f9-e5c9-4a97-9757-e8689527c542\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" Feb 27 10:35:13 crc kubenswrapper[4998]: E0227 10:35:13.842856 4998 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.842862 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbkxh\" (UniqueName: \"kubernetes.io/projected/aaf3093e-214b-48a9-8310-56ba32b094f7-kube-api-access-gbkxh\") pod \"ovn-operator-controller-manager-75684d597f-dh4xx\" (UID: \"aaf3093e-214b-48a9-8310-56ba32b094f7\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" Feb 27 10:35:13 crc kubenswrapper[4998]: E0227 10:35:13.842937 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert podName:6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:14.342918588 +0000 UTC m=+1066.341189566 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" (UID: "6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.842992 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9nhp\" (UniqueName: \"kubernetes.io/projected/d31df8ec-7c1c-42b3-b538-2949c015b6e6-kube-api-access-z9nhp\") pod \"octavia-operator-controller-manager-5d86c7ddb7-fs8hb\" (UID: \"d31df8ec-7c1c-42b3-b538-2949c015b6e6\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.843037 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fkh8\" (UniqueName: \"kubernetes.io/projected/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-kube-api-access-4fkh8\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.867686 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bp7f\" (UniqueName: \"kubernetes.io/projected/61e9c6f0-0c9c-47ca-b013-b94ba962ec66-kube-api-access-2bp7f\") pod \"nova-operator-controller-manager-74b6b5dc96-hv45m\" (UID: \"61e9c6f0-0c9c-47ca-b013-b94ba962ec66\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.890895 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fkh8\" (UniqueName: \"kubernetes.io/projected/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-kube-api-access-4fkh8\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.893550 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbkxh\" (UniqueName: \"kubernetes.io/projected/aaf3093e-214b-48a9-8310-56ba32b094f7-kube-api-access-gbkxh\") pod \"ovn-operator-controller-manager-75684d597f-dh4xx\" (UID: \"aaf3093e-214b-48a9-8310-56ba32b094f7\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.893922 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9nhp\" (UniqueName: \"kubernetes.io/projected/d31df8ec-7c1c-42b3-b538-2949c015b6e6-kube-api-access-z9nhp\") pod \"octavia-operator-controller-manager-5d86c7ddb7-fs8hb\" (UID: \"d31df8ec-7c1c-42b3-b538-2949c015b6e6\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.894812 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.896072 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.898349 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2gppz" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.917349 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst"] Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.923618 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.945403 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.945985 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77brb\" (UniqueName: \"kubernetes.io/projected/b712d7f9-e5c9-4a97-9757-e8689527c542-kube-api-access-77brb\") pod \"swift-operator-controller-manager-9b9ff9f4d-mhc4p\" (UID: \"b712d7f9-e5c9-4a97-9757-e8689527c542\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.946140 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbrn7\" (UniqueName: \"kubernetes.io/projected/3855bff7-c203-4258-98a0-5afa77cf9b5c-kube-api-access-gbrn7\") pod \"placement-operator-controller-manager-648564c9fc-l6kh7\" (UID: \"3855bff7-c203-4258-98a0-5afa77cf9b5c\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.946178 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc942\" (UniqueName: \"kubernetes.io/projected/4b9b7b96-1053-4c39-8079-b0c47d540545-kube-api-access-bc942\") pod \"test-operator-controller-manager-55b5ff4dbb-bq92j\" (UID: \"4b9b7b96-1053-4c39-8079-b0c47d540545\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.946207 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25sbh\" (UniqueName: \"kubernetes.io/projected/3b9761b2-ee38-418d-800f-ba54fc8960b6-kube-api-access-25sbh\") pod \"telemetry-operator-controller-manager-5fdb694969-vhsp7\" (UID: \"3b9761b2-ee38-418d-800f-ba54fc8960b6\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.963335 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.967146 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77brb\" (UniqueName: \"kubernetes.io/projected/b712d7f9-e5c9-4a97-9757-e8689527c542-kube-api-access-77brb\") pod \"swift-operator-controller-manager-9b9ff9f4d-mhc4p\" (UID: \"b712d7f9-e5c9-4a97-9757-e8689527c542\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" Feb 27 10:35:13 crc kubenswrapper[4998]: I0227 10:35:13.967837 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbrn7\" (UniqueName: \"kubernetes.io/projected/3855bff7-c203-4258-98a0-5afa77cf9b5c-kube-api-access-gbrn7\") pod \"placement-operator-controller-manager-648564c9fc-l6kh7\" (UID: \"3855bff7-c203-4258-98a0-5afa77cf9b5c\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" Feb 27 10:35:13 
crc kubenswrapper[4998]: I0227 10:35:13.989625 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.003844 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"] Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.005348 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.012768 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.013238 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.013530 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xh72p" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.035848 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"] Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.047746 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mqr5\" (UniqueName: \"kubernetes.io/projected/f5fd6986-20ba-4982-a978-c76652150ac8-kube-api-access-9mqr5\") pod \"watcher-operator-controller-manager-bccc79885-g9dst\" (UID: \"f5fd6986-20ba-4982-a978-c76652150ac8\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.047959 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bc942\" (UniqueName: \"kubernetes.io/projected/4b9b7b96-1053-4c39-8079-b0c47d540545-kube-api-access-bc942\") pod \"test-operator-controller-manager-55b5ff4dbb-bq92j\" (UID: \"4b9b7b96-1053-4c39-8079-b0c47d540545\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.048023 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25sbh\" (UniqueName: \"kubernetes.io/projected/3b9761b2-ee38-418d-800f-ba54fc8960b6-kube-api-access-25sbh\") pod \"telemetry-operator-controller-manager-5fdb694969-vhsp7\" (UID: \"3b9761b2-ee38-418d-800f-ba54fc8960b6\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.048091 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.048729 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.049646 4998 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.049701 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert podName:ad7e7a26-5a61-408d-86ae-0b25b8617147 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:15.049681647 +0000 UTC m=+1067.047952615 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert") pod "infra-operator-controller-manager-f7fcc58b9-kjznw" (UID: "ad7e7a26-5a61-408d-86ae-0b25b8617147") : secret "infra-operator-webhook-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.061468 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts"] Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.063214 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.069943 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-h95gz" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.070611 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc942\" (UniqueName: \"kubernetes.io/projected/4b9b7b96-1053-4c39-8079-b0c47d540545-kube-api-access-bc942\") pod \"test-operator-controller-manager-55b5ff4dbb-bq92j\" (UID: \"4b9b7b96-1053-4c39-8079-b0c47d540545\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.071325 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25sbh\" (UniqueName: \"kubernetes.io/projected/3b9761b2-ee38-418d-800f-ba54fc8960b6-kube-api-access-25sbh\") pod \"telemetry-operator-controller-manager-5fdb694969-vhsp7\" (UID: \"3b9761b2-ee38-418d-800f-ba54fc8960b6\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.085323 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts"] Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.146619 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.150391 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mqr5\" (UniqueName: \"kubernetes.io/projected/f5fd6986-20ba-4982-a978-c76652150ac8-kube-api-access-9mqr5\") pod \"watcher-operator-controller-manager-bccc79885-g9dst\" (UID: \"f5fd6986-20ba-4982-a978-c76652150ac8\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.150526 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.150573 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.150652 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqr5l\" (UniqueName: \"kubernetes.io/projected/86a542d0-8588-425c-9d8b-417b0b287ce2-kube-api-access-lqr5l\") pod 
\"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.172092 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mqr5\" (UniqueName: \"kubernetes.io/projected/f5fd6986-20ba-4982-a978-c76652150ac8-kube-api-access-9mqr5\") pod \"watcher-operator-controller-manager-bccc79885-g9dst\" (UID: \"f5fd6986-20ba-4982-a978-c76652150ac8\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.250253 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.252254 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqr5l\" (UniqueName: \"kubernetes.io/projected/86a542d0-8588-425c-9d8b-417b0b287ce2-kube-api-access-lqr5l\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.252339 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.252379 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wjh\" (UniqueName: 
\"kubernetes.io/projected/d660dd5a-dbce-413b-9075-9de9a2776d8c-kube-api-access-v6wjh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9rqts\" (UID: \"d660dd5a-dbce-413b-9075-9de9a2776d8c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.252413 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.252544 4998 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.252584 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:14.752569607 +0000 UTC m=+1066.750840575 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "webhook-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.255687 4998 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.255741 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:14.755728453 +0000 UTC m=+1066.753999421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "metrics-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.263363 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.281580 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqr5l\" (UniqueName: \"kubernetes.io/projected/86a542d0-8588-425c-9d8b-417b0b287ce2-kube-api-access-lqr5l\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.355351 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.355449 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6wjh\" (UniqueName: \"kubernetes.io/projected/d660dd5a-dbce-413b-9075-9de9a2776d8c-kube-api-access-v6wjh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9rqts\" (UID: \"d660dd5a-dbce-413b-9075-9de9a2776d8c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.356604 4998 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.356676 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert podName:6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248 nodeName:}" failed. 
No retries permitted until 2026-02-27 10:35:15.356659018 +0000 UTC m=+1067.354929986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" (UID: "6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.361149 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v"] Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.389703 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6wjh\" (UniqueName: \"kubernetes.io/projected/d660dd5a-dbce-413b-9075-9de9a2776d8c-kube-api-access-v6wjh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9rqts\" (UID: \"d660dd5a-dbce-413b-9075-9de9a2776d8c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.403547 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.603448 4998 generic.go:334] "Generic (PLEG): container finished" podID="79f06471-ac2f-4fec-95da-760990d48ad9" containerID="0b5304665eda60e1c3766a688a5b5f1bae8baa4be44271087140a1ee16f263e2" exitCode=0 Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.603572 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9z27z" event={"ID":"79f06471-ac2f-4fec-95da-760990d48ad9","Type":"ContainerDied","Data":"0b5304665eda60e1c3766a688a5b5f1bae8baa4be44271087140a1ee16f263e2"} Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.607632 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v" event={"ID":"f7519997-ef58-4091-bce9-a43762551d56","Type":"ContainerStarted","Data":"8b407f852662a5ccc03a28e275e5f67398f7333d36a3f2b94eb498898ac8b352"} Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.682354 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq"] Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.696828 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp"] Feb 27 10:35:14 crc kubenswrapper[4998]: W0227 10:35:14.703746 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda27dd930_0b57_431c_ae5c_7c9af1e11dfa.slice/crio-c93aa2f4d3e6c1b8ab9e176a33ee38074bf17b649514d686049add9e98cb16fa WatchSource:0}: Error finding container c93aa2f4d3e6c1b8ab9e176a33ee38074bf17b649514d686049add9e98cb16fa: Status 404 returned error can't find the container with id c93aa2f4d3e6c1b8ab9e176a33ee38074bf17b649514d686049add9e98cb16fa Feb 27 10:35:14 crc kubenswrapper[4998]: 
I0227 10:35:14.704778 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-249j5"] Feb 27 10:35:14 crc kubenswrapper[4998]: W0227 10:35:14.705082 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35236f87_0600_46d7_ba5b_7576e20b9dc4.slice/crio-0ab975cbec2375ecddd804ba1f78b36c54f506b96a108f047987cb8cb6eaab08 WatchSource:0}: Error finding container 0ab975cbec2375ecddd804ba1f78b36c54f506b96a108f047987cb8cb6eaab08: Status 404 returned error can't find the container with id 0ab975cbec2375ecddd804ba1f78b36c54f506b96a108f047987cb8cb6eaab08 Feb 27 10:35:14 crc kubenswrapper[4998]: W0227 10:35:14.707264 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36f2300a_f1d8_429d_8b4f_065c53c8b68b.slice/crio-88da37e2dd53e4ddd1f91bade8c12707929d4d59f805f8bf926e87d968d0f2b9 WatchSource:0}: Error finding container 88da37e2dd53e4ddd1f91bade8c12707929d4d59f805f8bf926e87d968d0f2b9: Status 404 returned error can't find the container with id 88da37e2dd53e4ddd1f91bade8c12707929d4d59f805f8bf926e87d968d0f2b9 Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.712194 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl"] Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.763540 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.763605 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.763822 4998 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.763897 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:15.763874713 +0000 UTC m=+1067.762145761 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "metrics-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.763956 4998 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.763998 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:15.763981586 +0000 UTC m=+1067.762252554 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "webhook-server-cert" not found Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.784209 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl"] Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.790189 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt"] Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.911956 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq"] Feb 27 10:35:14 crc kubenswrapper[4998]: W0227 10:35:14.913532 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec9f13a_82dd_4ab4_8f5a_c15f2a42f2dc.slice/crio-82dc27ab0a2f2d0b684d82b4e00926634b462cf06008d46743e30943183b7a57 WatchSource:0}: Error finding container 82dc27ab0a2f2d0b684d82b4e00926634b462cf06008d46743e30943183b7a57: Status 404 returned error can't find the container with id 82dc27ab0a2f2d0b684d82b4e00926634b462cf06008d46743e30943183b7a57 Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.918124 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb"] Feb 27 10:35:14 crc kubenswrapper[4998]: W0227 10:35:14.920867 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd31df8ec_7c1c_42b3_b538_2949c015b6e6.slice/crio-9912d179d26a507529cbf3034a274f734120f62acf4067ba3a74249716ffdf43 WatchSource:0}: Error finding container 
9912d179d26a507529cbf3034a274f734120f62acf4067ba3a74249716ffdf43: Status 404 returned error can't find the container with id 9912d179d26a507529cbf3034a274f734120f62acf4067ba3a74249716ffdf43 Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.932847 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-km79k"] Feb 27 10:35:14 crc kubenswrapper[4998]: W0227 10:35:14.937215 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd2a5873_e5a8_4cc6_af9e_d90dd1253bdf.slice/crio-c5a9c99dc31728d253fe849d2a901b065fd54144eff6213ecfd26deadb4e630e WatchSource:0}: Error finding container c5a9c99dc31728d253fe849d2a901b065fd54144eff6213ecfd26deadb4e630e: Status 404 returned error can't find the container with id c5a9c99dc31728d253fe849d2a901b065fd54144eff6213ecfd26deadb4e630e Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.945677 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh"] Feb 27 10:35:14 crc kubenswrapper[4998]: W0227 10:35:14.954865 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda77eccb8_8369_473a_93ad_d9d67ccea057.slice/crio-d20e6c4943db84fa0aac490d9ba4a3a50d2a71a44f6d15f3e7e8565f4422047f WatchSource:0}: Error finding container d20e6c4943db84fa0aac490d9ba4a3a50d2a71a44f6d15f3e7e8565f4422047f: Status 404 returned error can't find the container with id d20e6c4943db84fa0aac490d9ba4a3a50d2a71a44f6d15f3e7e8565f4422047f Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.974591 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m"] Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.976301 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2bp7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74b6b5dc96-hv45m_openstack-operators(61e9c6f0-0c9c-47ca-b013-b94ba962ec66): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:35:14 crc kubenswrapper[4998]: W0227 10:35:14.976793 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a837f08_5b8a_4cd8_8943_dc252cfb3f0f.slice/crio-1bbb22fa1597787fe5e3347094804ca55ce0dde5e7e57c6cc2eae077c8a16a2d WatchSource:0}: Error finding container 1bbb22fa1597787fe5e3347094804ca55ce0dde5e7e57c6cc2eae077c8a16a2d: Status 404 returned error can't find the container with id 1bbb22fa1597787fe5e3347094804ca55ce0dde5e7e57c6cc2eae077c8a16a2d Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.977668 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" podUID="61e9c6f0-0c9c-47ca-b013-b94ba962ec66" Feb 27 10:35:14 crc kubenswrapper[4998]: I0227 10:35:14.982697 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb"] Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.985748 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kssv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54688575f-fxkzb_openstack-operators(4a837f08-5b8a-4cd8-8943-dc252cfb3f0f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:35:14 crc kubenswrapper[4998]: E0227 10:35:14.986839 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" podUID="4a837f08-5b8a-4cd8-8943-dc252cfb3f0f" Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.067427 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.068105 4998 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 
10:35:15.068172 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert podName:ad7e7a26-5a61-408d-86ae-0b25b8617147 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:17.068152991 +0000 UTC m=+1069.066423959 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert") pod "infra-operator-controller-manager-f7fcc58b9-kjznw" (UID: "ad7e7a26-5a61-408d-86ae-0b25b8617147") : secret "infra-operator-webhook-server-cert" not found Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.133134 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7"] Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.141911 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx"] Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.147310 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7"] Feb 27 10:35:15 crc kubenswrapper[4998]: W0227 10:35:15.152623 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3855bff7_c203_4258_98a0_5afa77cf9b5c.slice/crio-859ba34e275e3b47f6889113458256c972cfec48db4388e8ccf7f846c3414b00 WatchSource:0}: Error finding container 859ba34e275e3b47f6889113458256c972cfec48db4388e8ccf7f846c3414b00: Status 404 returned error can't find the container with id 859ba34e275e3b47f6889113458256c972cfec48db4388e8ccf7f846c3414b00 Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.154675 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p"] Feb 27 10:35:15 crc kubenswrapper[4998]: W0227 10:35:15.157463 
4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaf3093e_214b_48a9_8310_56ba32b094f7.slice/crio-2ab8464016371ac2078943ddedea2012a38dc63cff254f3562fd7f599a98509b WatchSource:0}: Error finding container 2ab8464016371ac2078943ddedea2012a38dc63cff254f3562fd7f599a98509b: Status 404 returned error can't find the container with id 2ab8464016371ac2078943ddedea2012a38dc63cff254f3562fd7f599a98509b Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.163399 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst"] Feb 27 10:35:15 crc kubenswrapper[4998]: W0227 10:35:15.164029 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b9761b2_ee38_418d_800f_ba54fc8960b6.slice/crio-9cd6a6101713ec2dcefabdb838279f1df94fbf6fdebd112e0d55899ea7d71805 WatchSource:0}: Error finding container 9cd6a6101713ec2dcefabdb838279f1df94fbf6fdebd112e0d55899ea7d71805: Status 404 returned error can't find the container with id 9cd6a6101713ec2dcefabdb838279f1df94fbf6fdebd112e0d55899ea7d71805 Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.164747 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gbkxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-75684d597f-dh4xx_openstack-operators(aaf3093e-214b-48a9-8310-56ba32b094f7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.166847 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" podUID="aaf3093e-214b-48a9-8310-56ba32b094f7" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.182683 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25sbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fdb694969-vhsp7_openstack-operators(3b9761b2-ee38-418d-800f-ba54fc8960b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.182854 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-77brb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9b9ff9f4d-mhc4p_openstack-operators(b712d7f9-e5c9-4a97-9757-e8689527c542): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.183261 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9mqr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-g9dst_openstack-operators(f5fd6986-20ba-4982-a978-c76652150ac8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.184326 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" podUID="3b9761b2-ee38-418d-800f-ba54fc8960b6" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.184337 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" podUID="b712d7f9-e5c9-4a97-9757-e8689527c542" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.184377 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" podUID="f5fd6986-20ba-4982-a978-c76652150ac8" Feb 27 10:35:15 
crc kubenswrapper[4998]: I0227 10:35:15.276773 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j"] Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.286435 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts"] Feb 27 10:35:15 crc kubenswrapper[4998]: W0227 10:35:15.290724 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b9b7b96_1053_4c39_8079_b0c47d540545.slice/crio-d421fbc440a5a33f6e93f4ca9f5aec9b44d78594df30b9415637ff3b737cd28d WatchSource:0}: Error finding container d421fbc440a5a33f6e93f4ca9f5aec9b44d78594df30b9415637ff3b737cd28d: Status 404 returned error can't find the container with id d421fbc440a5a33f6e93f4ca9f5aec9b44d78594df30b9415637ff3b737cd28d Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.301262 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v6wjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9rqts_openstack-operators(d660dd5a-dbce-413b-9075-9de9a2776d8c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.302619 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" podUID="d660dd5a-dbce-413b-9075-9de9a2776d8c" Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.372298 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.372534 4998 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.372599 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert podName:6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:17.372585015 +0000 UTC m=+1069.370855983 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" (UID: "6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.636540 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt" event={"ID":"21596c8d-5360-482b-8cea-eda167f2f1cd","Type":"ContainerStarted","Data":"3211711c2be5b87d686ed5deb1cdaffc2b7c98ad89d15b9830a96f80bba111b9"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.638743 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" event={"ID":"36f927dc-fac8-4bb6-85d1-df539857edf1","Type":"ContainerStarted","Data":"b602abea907b92ec9bd90cd11c89837ba34d61e8bbdd3c9a14bc10b2acc6f469"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.640532 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" event={"ID":"a77eccb8-8369-473a-93ad-d9d67ccea057","Type":"ContainerStarted","Data":"d20e6c4943db84fa0aac490d9ba4a3a50d2a71a44f6d15f3e7e8565f4422047f"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.642286 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j" event={"ID":"4b9b7b96-1053-4c39-8079-b0c47d540545","Type":"ContainerStarted","Data":"d421fbc440a5a33f6e93f4ca9f5aec9b44d78594df30b9415637ff3b737cd28d"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.643802 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" event={"ID":"f5fd6986-20ba-4982-a978-c76652150ac8","Type":"ContainerStarted","Data":"185ae089163d35dab6b3cb96bbfd6f053167b271ae2449a7d313230084d297c7"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.645374 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" event={"ID":"b712d7f9-e5c9-4a97-9757-e8689527c542","Type":"ContainerStarted","Data":"066ac97cdc64616a1aafe083510d6e14e1e237ff657399bd82bd7bfa101d550a"} Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.645532 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" podUID="f5fd6986-20ba-4982-a978-c76652150ac8" Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.651788 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" event={"ID":"83211ec0-66ef-476c-ad20-e17e88348f29","Type":"ContainerStarted","Data":"a0745b2cc44ac16d3bf652f37f2a599d8e95a6452fb508234aebe4f6f263ff87"} Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.651864 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" podUID="b712d7f9-e5c9-4a97-9757-e8689527c542" Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.667738 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" event={"ID":"3855bff7-c203-4258-98a0-5afa77cf9b5c","Type":"ContainerStarted","Data":"859ba34e275e3b47f6889113458256c972cfec48db4388e8ccf7f846c3414b00"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.668656 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp" event={"ID":"a27dd930-0b57-431c-ae5c-7c9af1e11dfa","Type":"ContainerStarted","Data":"c93aa2f4d3e6c1b8ab9e176a33ee38074bf17b649514d686049add9e98cb16fa"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.684835 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" event={"ID":"cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf","Type":"ContainerStarted","Data":"c5a9c99dc31728d253fe849d2a901b065fd54144eff6213ecfd26deadb4e630e"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.686766 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" event={"ID":"d31df8ec-7c1c-42b3-b538-2949c015b6e6","Type":"ContainerStarted","Data":"9912d179d26a507529cbf3034a274f734120f62acf4067ba3a74249716ffdf43"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.714530 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" event={"ID":"aaf3093e-214b-48a9-8310-56ba32b094f7","Type":"ContainerStarted","Data":"2ab8464016371ac2078943ddedea2012a38dc63cff254f3562fd7f599a98509b"} Feb 27 
10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.727296 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" event={"ID":"3b9761b2-ee38-418d-800f-ba54fc8960b6","Type":"ContainerStarted","Data":"9cd6a6101713ec2dcefabdb838279f1df94fbf6fdebd112e0d55899ea7d71805"} Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.738172 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" podUID="3b9761b2-ee38-418d-800f-ba54fc8960b6" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.738521 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" podUID="aaf3093e-214b-48a9-8310-56ba32b094f7" Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.751077 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" event={"ID":"9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc","Type":"ContainerStarted","Data":"82dc27ab0a2f2d0b684d82b4e00926634b462cf06008d46743e30943183b7a57"} Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.785777 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") 
" pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.786062 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.787392 4998 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.787452 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:17.787437323 +0000 UTC m=+1069.785708281 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "webhook-server-cert" not found Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.787901 4998 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.787945 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:17.787933708 +0000 UTC m=+1069.786204676 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "metrics-server-cert" not found
Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.807920 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" event={"ID":"d660dd5a-dbce-413b-9075-9de9a2776d8c","Type":"ContainerStarted","Data":"c95611b2f9ceb2036a129ad13741fddae309409c64497dab7d74b8efc4823251"}
Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.813286 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" event={"ID":"4a837f08-5b8a-4cd8-8943-dc252cfb3f0f","Type":"ContainerStarted","Data":"1bbb22fa1597787fe5e3347094804ca55ce0dde5e7e57c6cc2eae077c8a16a2d"}
Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.826939 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" podUID="d660dd5a-dbce-413b-9075-9de9a2776d8c"
Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.827020 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" podUID="4a837f08-5b8a-4cd8-8943-dc252cfb3f0f"
Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.828391 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl" event={"ID":"35236f87-0600-46d7-ba5b-7576e20b9dc4","Type":"ContainerStarted","Data":"0ab975cbec2375ecddd804ba1f78b36c54f506b96a108f047987cb8cb6eaab08"}
Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.834942 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" event={"ID":"61e9c6f0-0c9c-47ca-b013-b94ba962ec66","Type":"ContainerStarted","Data":"0711488746fbd332b769d0a5fc8a1d9d2676da1d87cde50498b0ab7eec6408b0"}
Feb 27 10:35:15 crc kubenswrapper[4998]: E0227 10:35:15.844211 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" podUID="61e9c6f0-0c9c-47ca-b013-b94ba962ec66"
Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.851573 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9z27z" event={"ID":"79f06471-ac2f-4fec-95da-760990d48ad9","Type":"ContainerStarted","Data":"3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8"}
Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.867087 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5" event={"ID":"36f2300a-f1d8-429d-8b4f-065c53c8b68b","Type":"ContainerStarted","Data":"88da37e2dd53e4ddd1f91bade8c12707929d4d59f805f8bf926e87d968d0f2b9"}
Feb 27 10:35:15 crc kubenswrapper[4998]: I0227 10:35:15.962945 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9z27z" podStartSLOduration=2.323993159 podStartE2EDuration="4.962927647s" podCreationTimestamp="2026-02-27 10:35:11 +0000 UTC" firstStartedPulling="2026-02-27 10:35:12.510744335 +0000 UTC m=+1064.509015303" lastFinishedPulling="2026-02-27 10:35:15.149678823 +0000 UTC m=+1067.147949791" observedRunningTime="2026-02-27 10:35:15.962758712 +0000 UTC m=+1067.961029680" watchObservedRunningTime="2026-02-27 10:35:15.962927647 +0000 UTC m=+1067.961198615"
Feb 27 10:35:16 crc kubenswrapper[4998]: E0227 10:35:16.886265 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" podUID="4a837f08-5b8a-4cd8-8943-dc252cfb3f0f"
Feb 27 10:35:16 crc kubenswrapper[4998]: E0227 10:35:16.886602 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" podUID="61e9c6f0-0c9c-47ca-b013-b94ba962ec66"
Feb 27 10:35:16 crc kubenswrapper[4998]: E0227 10:35:16.886644 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" podUID="3b9761b2-ee38-418d-800f-ba54fc8960b6"
Feb 27 10:35:16 crc kubenswrapper[4998]: E0227 10:35:16.886709 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" podUID="f5fd6986-20ba-4982-a978-c76652150ac8"
Feb 27 10:35:16 crc kubenswrapper[4998]: E0227 10:35:16.886744 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" podUID="d660dd5a-dbce-413b-9075-9de9a2776d8c"
Feb 27 10:35:16 crc kubenswrapper[4998]: E0227 10:35:16.886785 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" podUID="aaf3093e-214b-48a9-8310-56ba32b094f7"
Feb 27 10:35:16 crc kubenswrapper[4998]: E0227 10:35:16.886820 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" podUID="b712d7f9-e5c9-4a97-9757-e8689527c542"
Feb 27 10:35:17 crc kubenswrapper[4998]: I0227 10:35:17.107914 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"
Feb 27 10:35:17 crc kubenswrapper[4998]: E0227 10:35:17.108507 4998 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 27 10:35:17 crc kubenswrapper[4998]: E0227 10:35:17.108567 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert podName:ad7e7a26-5a61-408d-86ae-0b25b8617147 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:21.108548238 +0000 UTC m=+1073.106819206 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert") pod "infra-operator-controller-manager-f7fcc58b9-kjznw" (UID: "ad7e7a26-5a61-408d-86ae-0b25b8617147") : secret "infra-operator-webhook-server-cert" not found
Feb 27 10:35:17 crc kubenswrapper[4998]: I0227 10:35:17.412639 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"
Feb 27 10:35:17 crc kubenswrapper[4998]: E0227 10:35:17.412836 4998 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 10:35:17 crc kubenswrapper[4998]: E0227 10:35:17.412931 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert podName:6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:21.412908288 +0000 UTC m=+1073.411179256 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" (UID: "6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 10:35:17 crc kubenswrapper[4998]: I0227 10:35:17.818985 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:17 crc kubenswrapper[4998]: I0227 10:35:17.819057 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:17 crc kubenswrapper[4998]: E0227 10:35:17.819207 4998 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 27 10:35:17 crc kubenswrapper[4998]: E0227 10:35:17.819284 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:21.819265857 +0000 UTC m=+1073.817536825 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "webhook-server-cert" not found
Feb 27 10:35:17 crc kubenswrapper[4998]: E0227 10:35:17.819779 4998 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 27 10:35:17 crc kubenswrapper[4998]: E0227 10:35:17.819815 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:21.819802783 +0000 UTC m=+1073.818073751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "metrics-server-cert" not found
Feb 27 10:35:21 crc kubenswrapper[4998]: I0227 10:35:21.180101 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"
Feb 27 10:35:21 crc kubenswrapper[4998]: E0227 10:35:21.180473 4998 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 27 10:35:21 crc kubenswrapper[4998]: E0227 10:35:21.180660 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert podName:ad7e7a26-5a61-408d-86ae-0b25b8617147 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:29.180636102 +0000 UTC m=+1081.178907080 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert") pod "infra-operator-controller-manager-f7fcc58b9-kjznw" (UID: "ad7e7a26-5a61-408d-86ae-0b25b8617147") : secret "infra-operator-webhook-server-cert" not found
Feb 27 10:35:21 crc kubenswrapper[4998]: I0227 10:35:21.401896 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9z27z"
Feb 27 10:35:21 crc kubenswrapper[4998]: I0227 10:35:21.402251 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9z27z"
Feb 27 10:35:21 crc kubenswrapper[4998]: I0227 10:35:21.456094 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9z27z"
Feb 27 10:35:21 crc kubenswrapper[4998]: I0227 10:35:21.486354 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"
Feb 27 10:35:21 crc kubenswrapper[4998]: E0227 10:35:21.486501 4998 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 10:35:21 crc kubenswrapper[4998]: E0227 10:35:21.486573 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert podName:6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:29.486556861 +0000 UTC m=+1081.484827829 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" (UID: "6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 10:35:21 crc kubenswrapper[4998]: I0227 10:35:21.892496 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:21 crc kubenswrapper[4998]: I0227 10:35:21.892555 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:21 crc kubenswrapper[4998]: E0227 10:35:21.892672 4998 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 27 10:35:21 crc kubenswrapper[4998]: E0227 10:35:21.892673 4998 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 27 10:35:21 crc kubenswrapper[4998]: E0227 10:35:21.892721 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:29.892705184 +0000 UTC m=+1081.890976152 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "webhook-server-cert" not found
Feb 27 10:35:21 crc kubenswrapper[4998]: E0227 10:35:21.892736 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:29.892729284 +0000 UTC m=+1081.891000252 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "metrics-server-cert" not found
Feb 27 10:35:21 crc kubenswrapper[4998]: I0227 10:35:21.995644 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9z27z"
Feb 27 10:35:22 crc kubenswrapper[4998]: I0227 10:35:22.044266 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9z27z"]
Feb 27 10:35:23 crc kubenswrapper[4998]: I0227 10:35:23.956758 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9z27z" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" containerName="registry-server" containerID="cri-o://3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8" gracePeriod=2
Feb 27 10:35:25 crc kubenswrapper[4998]: I0227 10:35:25.972772 4998 generic.go:334] "Generic (PLEG): container finished" podID="79f06471-ac2f-4fec-95da-760990d48ad9" containerID="3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8" exitCode=0
Feb 27 10:35:25 crc kubenswrapper[4998]: I0227 10:35:25.972830 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9z27z" event={"ID":"79f06471-ac2f-4fec-95da-760990d48ad9","Type":"ContainerDied","Data":"3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8"}
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.209969 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.216452 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7e7a26-5a61-408d-86ae-0b25b8617147-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-kjznw\" (UID: \"ad7e7a26-5a61-408d-86ae-0b25b8617147\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.229138 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-bbrst"
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.238375 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.517102 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.520727 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94\" (UID: \"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.579452 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lfkmr"
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.589038 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.923343 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:29 crc kubenswrapper[4998]: I0227 10:35:29.923393 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:29 crc kubenswrapper[4998]: E0227 10:35:29.923520 4998 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 27 10:35:29 crc kubenswrapper[4998]: E0227 10:35:29.923565 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:45.923550261 +0000 UTC m=+1097.921821229 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "webhook-server-cert" not found
Feb 27 10:35:29 crc kubenswrapper[4998]: E0227 10:35:29.923773 4998 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 27 10:35:29 crc kubenswrapper[4998]: E0227 10:35:29.923870 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs podName:86a542d0-8588-425c-9d8b-417b0b287ce2 nodeName:}" failed. No retries permitted until 2026-02-27 10:35:45.92385786 +0000 UTC m=+1097.922128828 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs") pod "openstack-operator-controller-manager-8f6f897df-qkvxk" (UID: "86a542d0-8588-425c-9d8b-417b0b287ce2") : secret "metrics-server-cert" not found
Feb 27 10:35:30 crc kubenswrapper[4998]: E0227 10:35:30.711351 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01"
Feb 27 10:35:30 crc kubenswrapper[4998]: E0227 10:35:30.711554 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z4dd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-545456dc4-wmdmq_openstack-operators(83211ec0-66ef-476c-ad20-e17e88348f29): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 10:35:30 crc kubenswrapper[4998]: E0227 10:35:30.712888 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" podUID="83211ec0-66ef-476c-ad20-e17e88348f29"
Feb 27 10:35:31 crc kubenswrapper[4998]: E0227 10:35:31.003746 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" podUID="83211ec0-66ef-476c-ad20-e17e88348f29"
Feb 27 10:35:31 crc kubenswrapper[4998]: E0227 10:35:31.249023 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120"
Feb 27 10:35:31 crc kubenswrapper[4998]: E0227 10:35:31.249244 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d4tvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-6db6876945-km79k_openstack-operators(cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 10:35:31 crc kubenswrapper[4998]: E0227 10:35:31.250433 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" podUID="cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf"
Feb 27 10:35:31 crc kubenswrapper[4998]: E0227 10:35:31.402150 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8 is running failed: container process not found" containerID="3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8" cmd=["grpc_health_probe","-addr=:50051"]
Feb 27 10:35:31 crc kubenswrapper[4998]: E0227 10:35:31.403800 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8 is running failed: container process not found" containerID="3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8" cmd=["grpc_health_probe","-addr=:50051"]
Feb 27 10:35:31 crc kubenswrapper[4998]: E0227 10:35:31.404583 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8 is running failed: container process not found" containerID="3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8" cmd=["grpc_health_probe","-addr=:50051"]
Feb 27 10:35:31 crc kubenswrapper[4998]: E0227 10:35:31.404624 4998 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-9z27z" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" containerName="registry-server"
Feb 27 10:35:32 crc kubenswrapper[4998]: E0227 10:35:32.009852 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" podUID="cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf"
Feb 27 10:35:32 crc kubenswrapper[4998]: E0227 10:35:32.074302 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e"
Feb 27 10:35:32 crc kubenswrapper[4998]: E0227 10:35:32.074482 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gbrn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-648564c9fc-l6kh7_openstack-operators(3855bff7-c203-4258-98a0-5afa77cf9b5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 10:35:32 crc kubenswrapper[4998]: E0227 10:35:32.075831 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" podUID="3855bff7-c203-4258-98a0-5afa77cf9b5c"
Feb 27 10:35:32 crc kubenswrapper[4998]: E0227 10:35:32.786111 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26"
Feb 27 10:35:32 crc kubenswrapper[4998]: E0227 10:35:32.786351 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drbj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-qqrgh_openstack-operators(a77eccb8-8369-473a-93ad-d9d67ccea057): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:35:32 crc kubenswrapper[4998]: E0227 10:35:32.787583 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" podUID="a77eccb8-8369-473a-93ad-d9d67ccea057" Feb 27 10:35:33 crc kubenswrapper[4998]: E0227 10:35:33.022996 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" podUID="a77eccb8-8369-473a-93ad-d9d67ccea057" Feb 27 10:35:33 crc kubenswrapper[4998]: E0227 10:35:33.023025 4998 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" podUID="3855bff7-c203-4258-98a0-5afa77cf9b5c" Feb 27 10:35:33 crc kubenswrapper[4998]: E0227 10:35:33.513945 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214" Feb 27 10:35:33 crc kubenswrapper[4998]: E0227 10:35:33.514192 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp4cq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-5d87c9d997-nnlcl_openstack-operators(36f927dc-fac8-4bb6-85d1-df539857edf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:35:33 crc kubenswrapper[4998]: E0227 10:35:33.516184 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" podUID="36f927dc-fac8-4bb6-85d1-df539857edf1" Feb 27 10:35:33 crc kubenswrapper[4998]: E0227 10:35:33.960605 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd" Feb 27 10:35:33 crc kubenswrapper[4998]: E0227 10:35:33.961108 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9nhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5d86c7ddb7-fs8hb_openstack-operators(d31df8ec-7c1c-42b3-b538-2949c015b6e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:35:33 crc kubenswrapper[4998]: E0227 10:35:33.963102 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" podUID="d31df8ec-7c1c-42b3-b538-2949c015b6e6" Feb 27 10:35:34 crc kubenswrapper[4998]: E0227 10:35:34.028841 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214\\\"\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" podUID="36f927dc-fac8-4bb6-85d1-df539857edf1" Feb 27 10:35:34 crc kubenswrapper[4998]: E0227 10:35:34.029008 4998 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" podUID="d31df8ec-7c1c-42b3-b538-2949c015b6e6" Feb 27 10:35:34 crc kubenswrapper[4998]: E0227 10:35:34.470867 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505" Feb 27 10:35:34 crc kubenswrapper[4998]: E0227 10:35:34.471083 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dfbnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b6bfb6475-zr6mq_openstack-operators(9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:35:34 crc kubenswrapper[4998]: E0227 10:35:34.472304 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" podUID="9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc" Feb 27 10:35:35 crc kubenswrapper[4998]: E0227 10:35:35.039077 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" podUID="9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc" Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.055570 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9z27z" event={"ID":"79f06471-ac2f-4fec-95da-760990d48ad9","Type":"ContainerDied","Data":"e899b6a71ad53ee2fee2a17cef30d12591858fab76e019e50de89ae6ccde33a6"} Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.055806 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e899b6a71ad53ee2fee2a17cef30d12591858fab76e019e50de89ae6ccde33a6" Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.065199 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.079029 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-utilities\") pod \"79f06471-ac2f-4fec-95da-760990d48ad9\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.079179 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms8b2\" (UniqueName: \"kubernetes.io/projected/79f06471-ac2f-4fec-95da-760990d48ad9-kube-api-access-ms8b2\") pod \"79f06471-ac2f-4fec-95da-760990d48ad9\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.079337 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-catalog-content\") pod 
\"79f06471-ac2f-4fec-95da-760990d48ad9\" (UID: \"79f06471-ac2f-4fec-95da-760990d48ad9\") " Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.081294 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-utilities" (OuterVolumeSpecName: "utilities") pod "79f06471-ac2f-4fec-95da-760990d48ad9" (UID: "79f06471-ac2f-4fec-95da-760990d48ad9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.090490 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f06471-ac2f-4fec-95da-760990d48ad9-kube-api-access-ms8b2" (OuterVolumeSpecName: "kube-api-access-ms8b2") pod "79f06471-ac2f-4fec-95da-760990d48ad9" (UID: "79f06471-ac2f-4fec-95da-760990d48ad9"). InnerVolumeSpecName "kube-api-access-ms8b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.145046 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79f06471-ac2f-4fec-95da-760990d48ad9" (UID: "79f06471-ac2f-4fec-95da-760990d48ad9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.182608 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms8b2\" (UniqueName: \"kubernetes.io/projected/79f06471-ac2f-4fec-95da-760990d48ad9-kube-api-access-ms8b2\") on node \"crc\" DevicePath \"\"" Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.182645 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:35:38 crc kubenswrapper[4998]: I0227 10:35:38.182657 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f06471-ac2f-4fec-95da-760990d48ad9-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:35:39 crc kubenswrapper[4998]: I0227 10:35:39.062252 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9z27z" Feb 27 10:35:39 crc kubenswrapper[4998]: I0227 10:35:39.094131 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9z27z"] Feb 27 10:35:39 crc kubenswrapper[4998]: I0227 10:35:39.095954 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9z27z"] Feb 27 10:35:39 crc kubenswrapper[4998]: I0227 10:35:39.426482 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"] Feb 27 10:35:39 crc kubenswrapper[4998]: W0227 10:35:39.434735 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad7e7a26_5a61_408d_86ae_0b25b8617147.slice/crio-63da4a97ce0d2a5679d8129920ccea436fca769c628aebd2d21d4f9defb3af1f WatchSource:0}: Error finding container 
63da4a97ce0d2a5679d8129920ccea436fca769c628aebd2d21d4f9defb3af1f: Status 404 returned error can't find the container with id 63da4a97ce0d2a5679d8129920ccea436fca769c628aebd2d21d4f9defb3af1f Feb 27 10:35:39 crc kubenswrapper[4998]: I0227 10:35:39.559618 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"] Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.089394 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" event={"ID":"ad7e7a26-5a61-408d-86ae-0b25b8617147","Type":"ContainerStarted","Data":"63da4a97ce0d2a5679d8129920ccea436fca769c628aebd2d21d4f9defb3af1f"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.091920 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" event={"ID":"d660dd5a-dbce-413b-9075-9de9a2776d8c","Type":"ContainerStarted","Data":"03a73c0a932454eee11cb0623406b3edf70036ffe7fbfdb614ded4f7e9bff06e"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.094377 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v" event={"ID":"f7519997-ef58-4091-bce9-a43762551d56","Type":"ContainerStarted","Data":"3fcd1e842f7597e62a8e55f9f0e2ce66aed0edde425c2ba837d585c42dc581ae"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.094501 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.095736 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" 
event={"ID":"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248","Type":"ContainerStarted","Data":"6213ef5a3a85ba498a53e9e72242872b8032a54fe1746a9805e74a70b1b177b7"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.097768 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" event={"ID":"b712d7f9-e5c9-4a97-9757-e8689527c542","Type":"ContainerStarted","Data":"df3341d2f016a7c97f1e2439e07e5515a0659619b1570df6db64b596473f1ff9"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.097944 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.107703 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j" event={"ID":"4b9b7b96-1053-4c39-8079-b0c47d540545","Type":"ContainerStarted","Data":"85a16a0e8c5f4ccc280c3f6725cc708d4e948e04a9a94934f94c536898989e20"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.107764 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.124582 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9rqts" podStartSLOduration=3.290846947 podStartE2EDuration="27.124566809s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:15.301123671 +0000 UTC m=+1067.299394639" lastFinishedPulling="2026-02-27 10:35:39.134843533 +0000 UTC m=+1091.133114501" observedRunningTime="2026-02-27 10:35:40.12228849 +0000 UTC m=+1092.120559448" watchObservedRunningTime="2026-02-27 10:35:40.124566809 +0000 UTC m=+1092.122837767" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.135384 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5" event={"ID":"36f2300a-f1d8-429d-8b4f-065c53c8b68b","Type":"ContainerStarted","Data":"ce93a07c3fa83866784df896cd2a303884a22f11edeb7bc439c327e1c4140dc9"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.135426 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.143618 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" event={"ID":"4a837f08-5b8a-4cd8-8943-dc252cfb3f0f","Type":"ContainerStarted","Data":"e054495e79b7b8091dd88460e49f286681fdfbe4a941979120d891895d5e0faf"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.144335 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.145277 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v" podStartSLOduration=6.043091209 podStartE2EDuration="27.145253312s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.454859589 +0000 UTC m=+1066.453130557" lastFinishedPulling="2026-02-27 10:35:35.557021692 +0000 UTC m=+1087.555292660" observedRunningTime="2026-02-27 10:35:40.141964071 +0000 UTC m=+1092.140235049" watchObservedRunningTime="2026-02-27 10:35:40.145253312 +0000 UTC m=+1092.143524270" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.150555 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" 
event={"ID":"f5fd6986-20ba-4982-a978-c76652150ac8","Type":"ContainerStarted","Data":"e415282d3db521cbe8247c8ab749c0cff08c10faccc264915704d8d79332c06e"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.151073 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.156959 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl" event={"ID":"35236f87-0600-46d7-ba5b-7576e20b9dc4","Type":"ContainerStarted","Data":"d1eba6759759bd24492123106533ae4c008df181e1a2427573042f998f958a3e"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.157050 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.162067 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp" event={"ID":"a27dd930-0b57-431c-ae5c-7c9af1e11dfa","Type":"ContainerStarted","Data":"e94aecb0949746d0f57aa69f4a96ad6fd288ea9071bbd9c6862bbf2480c8b6fb"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.162203 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp" Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.165274 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" event={"ID":"aaf3093e-214b-48a9-8310-56ba32b094f7","Type":"ContainerStarted","Data":"6a4427bdb8064dca9c3a9d9fbf2463b012349f8a19aa257acc753346cff6335b"} Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.165749 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.168431 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" event={"ID":"3b9761b2-ee38-418d-800f-ba54fc8960b6","Type":"ContainerStarted","Data":"07d18a69bb3d067df8a01f098cd0f7d9d5b5de99bc72d573681c79ba0f34e3cd"}
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.168809 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.172562 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" event={"ID":"61e9c6f0-0c9c-47ca-b013-b94ba962ec66","Type":"ContainerStarted","Data":"9f4cdf26633d9cfb0b78de4dc97f811a9bac839a5ec22b96f999ee20f513f554"}
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.172978 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.189342 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j" podStartSLOduration=5.495944328 podStartE2EDuration="27.189281308s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:15.293462587 +0000 UTC m=+1067.291733555" lastFinishedPulling="2026-02-27 10:35:36.986799567 +0000 UTC m=+1088.985070535" observedRunningTime="2026-02-27 10:35:40.185653486 +0000 UTC m=+1092.183924454" watchObservedRunningTime="2026-02-27 10:35:40.189281308 +0000 UTC m=+1092.187552276"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.201889 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt" event={"ID":"21596c8d-5360-482b-8cea-eda167f2f1cd","Type":"ContainerStarted","Data":"73285beac1c25a745ec573e20d23078f659fb5cf151b77cb8a48f626ddfd367e"}
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.203056 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.239059 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p" podStartSLOduration=3.285221335 podStartE2EDuration="27.239038787s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:15.182755814 +0000 UTC m=+1067.181026782" lastFinishedPulling="2026-02-27 10:35:39.136573266 +0000 UTC m=+1091.134844234" observedRunningTime="2026-02-27 10:35:40.226554886 +0000 UTC m=+1092.224825864" watchObservedRunningTime="2026-02-27 10:35:40.239038787 +0000 UTC m=+1092.237309755"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.290733 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp" podStartSLOduration=5.009208143 podStartE2EDuration="27.290708597s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.705260292 +0000 UTC m=+1066.703531260" lastFinishedPulling="2026-02-27 10:35:36.986760746 +0000 UTC m=+1088.985031714" observedRunningTime="2026-02-27 10:35:40.289732966 +0000 UTC m=+1092.288003934" watchObservedRunningTime="2026-02-27 10:35:40.290708597 +0000 UTC m=+1092.288979565"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.294279 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt" podStartSLOduration=5.102937316 podStartE2EDuration="27.294265255s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.796281093 +0000 UTC m=+1066.794552061" lastFinishedPulling="2026-02-27 10:35:36.987609032 +0000 UTC m=+1088.985880000" observedRunningTime="2026-02-27 10:35:40.251734186 +0000 UTC m=+1092.250005154" watchObservedRunningTime="2026-02-27 10:35:40.294265255 +0000 UTC m=+1092.292536223"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.313581 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb" podStartSLOduration=3.286810065 podStartE2EDuration="27.313561765s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.985615289 +0000 UTC m=+1066.983886267" lastFinishedPulling="2026-02-27 10:35:39.012366999 +0000 UTC m=+1091.010637967" observedRunningTime="2026-02-27 10:35:40.310468471 +0000 UTC m=+1092.308739439" watchObservedRunningTime="2026-02-27 10:35:40.313561765 +0000 UTC m=+1092.311832733"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.342399 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl" podStartSLOduration=5.062929753 podStartE2EDuration="27.342379626s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.707375395 +0000 UTC m=+1066.705646363" lastFinishedPulling="2026-02-27 10:35:36.986825268 +0000 UTC m=+1088.985096236" observedRunningTime="2026-02-27 10:35:40.336209107 +0000 UTC m=+1092.334480075" watchObservedRunningTime="2026-02-27 10:35:40.342379626 +0000 UTC m=+1092.340650594"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.364107 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7" podStartSLOduration=3.447830165 podStartE2EDuration="27.364087649s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:15.182563978 +0000 UTC m=+1067.180834946" lastFinishedPulling="2026-02-27 10:35:39.098821462 +0000 UTC m=+1091.097092430" observedRunningTime="2026-02-27 10:35:40.358525559 +0000 UTC m=+1092.356796547" watchObservedRunningTime="2026-02-27 10:35:40.364087649 +0000 UTC m=+1092.362358607"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.383578 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m" podStartSLOduration=3.346818397 podStartE2EDuration="27.383558944s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.976067427 +0000 UTC m=+1066.974338395" lastFinishedPulling="2026-02-27 10:35:39.012807974 +0000 UTC m=+1091.011078942" observedRunningTime="2026-02-27 10:35:40.379585243 +0000 UTC m=+1092.377856211" watchObservedRunningTime="2026-02-27 10:35:40.383558944 +0000 UTC m=+1092.381829912"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.402709 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst" podStartSLOduration=3.543502189 podStartE2EDuration="27.402691159s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:15.182916609 +0000 UTC m=+1067.181187567" lastFinishedPulling="2026-02-27 10:35:39.042105559 +0000 UTC m=+1091.040376537" observedRunningTime="2026-02-27 10:35:40.401699318 +0000 UTC m=+1092.399970286" watchObservedRunningTime="2026-02-27 10:35:40.402691159 +0000 UTC m=+1092.400962127"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.423368 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5" podStartSLOduration=5.145656482 podStartE2EDuration="27.42335359s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.709074548 +0000 UTC m=+1066.707345516" lastFinishedPulling="2026-02-27 10:35:36.986771656 +0000 UTC m=+1088.985042624" observedRunningTime="2026-02-27 10:35:40.422918907 +0000 UTC m=+1092.421189865" watchObservedRunningTime="2026-02-27 10:35:40.42335359 +0000 UTC m=+1092.421624558"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.454965 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx" podStartSLOduration=3.607817154 podStartE2EDuration="27.454946555s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:15.164530087 +0000 UTC m=+1067.162801055" lastFinishedPulling="2026-02-27 10:35:39.011659488 +0000 UTC m=+1091.009930456" observedRunningTime="2026-02-27 10:35:40.442872787 +0000 UTC m=+1092.441143775" watchObservedRunningTime="2026-02-27 10:35:40.454946555 +0000 UTC m=+1092.453217523"
Feb 27 10:35:40 crc kubenswrapper[4998]: I0227 10:35:40.802301 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" path="/var/lib/kubelet/pods/79f06471-ac2f-4fec-95da-760990d48ad9/volumes"
Feb 27 10:35:43 crc kubenswrapper[4998]: I0227 10:35:43.226384 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" event={"ID":"6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248","Type":"ContainerStarted","Data":"65f6f1b0073cb121d693e2c859bac778608ae101678de621b63d4b16119c702d"}
Feb 27 10:35:43 crc kubenswrapper[4998]: I0227 10:35:43.226940 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"
Feb 27 10:35:43 crc kubenswrapper[4998]: I0227 10:35:43.227496 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" event={"ID":"ad7e7a26-5a61-408d-86ae-0b25b8617147","Type":"ContainerStarted","Data":"dd447bfb6e42b9ad24146a3d558cf2adb06b645e881e787a46edb7d513afb512"}
Feb 27 10:35:43 crc kubenswrapper[4998]: I0227 10:35:43.228154 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"
Feb 27 10:35:43 crc kubenswrapper[4998]: I0227 10:35:43.250789 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94" podStartSLOduration=27.375113835 podStartE2EDuration="30.250773448s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:39.580279465 +0000 UTC m=+1091.578550443" lastFinishedPulling="2026-02-27 10:35:42.455939088 +0000 UTC m=+1094.454210056" observedRunningTime="2026-02-27 10:35:43.247197249 +0000 UTC m=+1095.245468247" watchObservedRunningTime="2026-02-27 10:35:43.250773448 +0000 UTC m=+1095.249044416"
Feb 27 10:35:43 crc kubenswrapper[4998]: I0227 10:35:43.278618 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw" podStartSLOduration=27.256992295 podStartE2EDuration="30.278596428s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:39.436655706 +0000 UTC m=+1091.434926674" lastFinishedPulling="2026-02-27 10:35:42.458259839 +0000 UTC m=+1094.456530807" observedRunningTime="2026-02-27 10:35:43.273318577 +0000 UTC m=+1095.271589545" watchObservedRunningTime="2026-02-27 10:35:43.278596428 +0000 UTC m=+1095.276867396"
Feb 27 10:35:44 crc kubenswrapper[4998]: I0227 10:35:44.052788 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-mhc4p"
Feb 27 10:35:44 crc kubenswrapper[4998]: I0227 10:35:44.148427 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-vhsp7"
Feb 27 10:35:44 crc kubenswrapper[4998]: I0227 10:35:44.254716 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-bq92j"
Feb 27 10:35:44 crc kubenswrapper[4998]: I0227 10:35:44.273114 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-g9dst"
Feb 27 10:35:45 crc kubenswrapper[4998]: I0227 10:35:45.252342 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" event={"ID":"cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf","Type":"ContainerStarted","Data":"68e1d2609eab7c5b77b44120b87c84c48fd97310a4e1ddb01c9e977248673651"}
Feb 27 10:35:45 crc kubenswrapper[4998]: I0227 10:35:45.252744 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k"
Feb 27 10:35:45 crc kubenswrapper[4998]: I0227 10:35:45.270011 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k" podStartSLOduration=2.994394868 podStartE2EDuration="32.269994126s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.940499901 +0000 UTC m=+1066.938770869" lastFinishedPulling="2026-02-27 10:35:44.216099159 +0000 UTC m=+1096.214370127" observedRunningTime="2026-02-27 10:35:45.266621113 +0000 UTC m=+1097.264892101" watchObservedRunningTime="2026-02-27 10:35:45.269994126 +0000 UTC m=+1097.268265094"
Feb 27 10:35:45 crc kubenswrapper[4998]: I0227 10:35:45.925143 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:45 crc kubenswrapper[4998]: I0227 10:35:45.925509 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:45 crc kubenswrapper[4998]: I0227 10:35:45.931552 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-webhook-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:45 crc kubenswrapper[4998]: I0227 10:35:45.933155 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86a542d0-8588-425c-9d8b-417b0b287ce2-metrics-certs\") pod \"openstack-operator-controller-manager-8f6f897df-qkvxk\" (UID: \"86a542d0-8588-425c-9d8b-417b0b287ce2\") " pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:46 crc kubenswrapper[4998]: I0227 10:35:46.153615 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xh72p"
Feb 27 10:35:46 crc kubenswrapper[4998]: I0227 10:35:46.161311 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:46 crc kubenswrapper[4998]: I0227 10:35:46.261109 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" event={"ID":"83211ec0-66ef-476c-ad20-e17e88348f29","Type":"ContainerStarted","Data":"d886bdceeecdf5d92affb9e9807d175bf495100c9b86796ef8133888e470323c"}
Feb 27 10:35:46 crc kubenswrapper[4998]: I0227 10:35:46.261331 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq"
Feb 27 10:35:46 crc kubenswrapper[4998]: I0227 10:35:46.300065 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq" podStartSLOduration=2.822521626 podStartE2EDuration="33.300037655s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.699518836 +0000 UTC m=+1066.697789804" lastFinishedPulling="2026-02-27 10:35:45.177034865 +0000 UTC m=+1097.175305833" observedRunningTime="2026-02-27 10:35:46.286056718 +0000 UTC m=+1098.284327706" watchObservedRunningTime="2026-02-27 10:35:46.300037655 +0000 UTC m=+1098.298309253"
Feb 27 10:35:46 crc kubenswrapper[4998]: I0227 10:35:46.367897 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"]
Feb 27 10:35:46 crc kubenswrapper[4998]: W0227 10:35:46.377884 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86a542d0_8588_425c_9d8b_417b0b287ce2.slice/crio-017bc9b24318fa3378581623cc368fbd826b6fa06abd4811990eeca89e9324b1 WatchSource:0}: Error finding container 017bc9b24318fa3378581623cc368fbd826b6fa06abd4811990eeca89e9324b1: Status 404 returned error can't find the container with id 017bc9b24318fa3378581623cc368fbd826b6fa06abd4811990eeca89e9324b1
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.268005 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" event={"ID":"86a542d0-8588-425c-9d8b-417b0b287ce2","Type":"ContainerStarted","Data":"2bcc46c4d915639cfe412debbcf8bdb7d8b3ce0cf9c479b78cb95bf2c59d2b7f"}
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.268339 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.268360 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" event={"ID":"86a542d0-8588-425c-9d8b-417b0b287ce2","Type":"ContainerStarted","Data":"017bc9b24318fa3378581623cc368fbd826b6fa06abd4811990eeca89e9324b1"}
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.270016 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" event={"ID":"36f927dc-fac8-4bb6-85d1-df539857edf1","Type":"ContainerStarted","Data":"0e19e77e71758c0254a90e5c5bdf4fb0c21d0419737439520e016b57ba2c3ae2"}
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.270214 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl"
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.271677 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" event={"ID":"a77eccb8-8369-473a-93ad-d9d67ccea057","Type":"ContainerStarted","Data":"da7a4a7c2e60abd4ad9ca976d950b58d9fcb128f3743b43ba2a8f25d1957f9e5"}
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.271909 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh"
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.297187 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk" podStartSLOduration=34.297167378 podStartE2EDuration="34.297167378s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:35:47.291268678 +0000 UTC m=+1099.289539646" watchObservedRunningTime="2026-02-27 10:35:47.297167378 +0000 UTC m=+1099.295438346"
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.310563 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh" podStartSLOduration=2.941326606 podStartE2EDuration="34.310548357s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.95947169 +0000 UTC m=+1066.957742658" lastFinishedPulling="2026-02-27 10:35:46.328693441 +0000 UTC m=+1098.326964409" observedRunningTime="2026-02-27 10:35:47.309307139 +0000 UTC m=+1099.307578107" watchObservedRunningTime="2026-02-27 10:35:47.310548357 +0000 UTC m=+1099.308819325"
Feb 27 10:35:47 crc kubenswrapper[4998]: I0227 10:35:47.325035 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl" podStartSLOduration=2.455681575 podStartE2EDuration="34.325004319s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.794843509 +0000 UTC m=+1066.793114477" lastFinishedPulling="2026-02-27 10:35:46.664166253 +0000 UTC m=+1098.662437221" observedRunningTime="2026-02-27 10:35:47.321459781 +0000 UTC m=+1099.319730749" watchObservedRunningTime="2026-02-27 10:35:47.325004319 +0000 UTC m=+1099.323275287"
Feb 27 10:35:48 crc kubenswrapper[4998]: I0227 10:35:48.278366 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" event={"ID":"3855bff7-c203-4258-98a0-5afa77cf9b5c","Type":"ContainerStarted","Data":"daf3b01f59ef0d83cfd0b986f81a3671f365a8a42e99f8861e11897a373fc31a"}
Feb 27 10:35:48 crc kubenswrapper[4998]: I0227 10:35:48.278862 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7"
Feb 27 10:35:48 crc kubenswrapper[4998]: I0227 10:35:48.795559 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7" podStartSLOduration=3.810627393 podStartE2EDuration="35.795533829s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:15.158676488 +0000 UTC m=+1067.156947456" lastFinishedPulling="2026-02-27 10:35:47.143582924 +0000 UTC m=+1099.141853892" observedRunningTime="2026-02-27 10:35:48.291727653 +0000 UTC m=+1100.289998621" watchObservedRunningTime="2026-02-27 10:35:48.795533829 +0000 UTC m=+1100.793804797"
Feb 27 10:35:49 crc kubenswrapper[4998]: I0227 10:35:49.244550 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-kjznw"
Feb 27 10:35:49 crc kubenswrapper[4998]: I0227 10:35:49.595794 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94"
Feb 27 10:35:50 crc kubenswrapper[4998]: I0227 10:35:50.310649 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" event={"ID":"d31df8ec-7c1c-42b3-b538-2949c015b6e6","Type":"ContainerStarted","Data":"840717ebc83914191e529659011018c57dd89c3c12bb0ef657b76dfebc3ab30f"}
Feb 27 10:35:50 crc kubenswrapper[4998]: I0227 10:35:50.311503 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb"
Feb 27 10:35:50 crc kubenswrapper[4998]: I0227 10:35:50.313254 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" event={"ID":"9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc","Type":"ContainerStarted","Data":"3051f93d37fb128c606bafe70a3c7c7914ff9002fc5d09fd4bbba28a793fff29"}
Feb 27 10:35:50 crc kubenswrapper[4998]: I0227 10:35:50.313619 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq"
Feb 27 10:35:50 crc kubenswrapper[4998]: I0227 10:35:50.327134 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb" podStartSLOduration=2.698343107 podStartE2EDuration="37.327119766s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.928520794 +0000 UTC m=+1066.926791762" lastFinishedPulling="2026-02-27 10:35:49.557297453 +0000 UTC m=+1101.555568421" observedRunningTime="2026-02-27 10:35:50.324286644 +0000 UTC m=+1102.322557612" watchObservedRunningTime="2026-02-27 10:35:50.327119766 +0000 UTC m=+1102.325390734"
Feb 27 10:35:50 crc kubenswrapper[4998]: I0227 10:35:50.343700 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq" podStartSLOduration=2.70699954 podStartE2EDuration="37.343681209s" podCreationTimestamp="2026-02-27 10:35:13 +0000 UTC" firstStartedPulling="2026-02-27 10:35:14.919933421 +0000 UTC m=+1066.918204389" lastFinishedPulling="2026-02-27 10:35:49.55661509 +0000 UTC m=+1101.554886058" observedRunningTime="2026-02-27 10:35:50.34029158 +0000 UTC m=+1102.338562548" watchObservedRunningTime="2026-02-27 10:35:50.343681209 +0000 UTC m=+1102.341952177"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.542713 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-gg44v"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.559365 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-nnlcl"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.579344 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-249j5"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.586332 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wxtjl"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.618596 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-jcgpp"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.658647 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wmdmq"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.690468 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-c9hqt"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.710905 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qqrgh"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.786799 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-km79k"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.790887 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-fxkzb"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.927872 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-hv45m"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.966605 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh4xx"
Feb 27 10:35:53 crc kubenswrapper[4998]: I0227 10:35:53.993060 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-l6kh7"
Feb 27 10:35:56 crc kubenswrapper[4998]: I0227 10:35:56.169290 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8f6f897df-qkvxk"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.144403 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536476-dsmkc"]
Feb 27 10:36:00 crc kubenswrapper[4998]: E0227 10:36:00.145035 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" containerName="registry-server"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.145053 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" containerName="registry-server"
Feb 27 10:36:00 crc kubenswrapper[4998]: E0227 10:36:00.145069 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" containerName="extract-content"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.145079 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" containerName="extract-content"
Feb 27 10:36:00 crc kubenswrapper[4998]: E0227 10:36:00.145097 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" containerName="extract-utilities"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.145107 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" containerName="extract-utilities"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.145294 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f06471-ac2f-4fec-95da-760990d48ad9" containerName="registry-server"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.145869 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536476-dsmkc"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.153081 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536476-dsmkc"]
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.155455 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.155467 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.162065 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.321767 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7rq\" (UniqueName: \"kubernetes.io/projected/bb0ae9f5-5e46-43c3-ae31-23a68848c96d-kube-api-access-tm7rq\") pod \"auto-csr-approver-29536476-dsmkc\" (UID: \"bb0ae9f5-5e46-43c3-ae31-23a68848c96d\") " pod="openshift-infra/auto-csr-approver-29536476-dsmkc"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.423006 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7rq\" (UniqueName: \"kubernetes.io/projected/bb0ae9f5-5e46-43c3-ae31-23a68848c96d-kube-api-access-tm7rq\") pod \"auto-csr-approver-29536476-dsmkc\" (UID: \"bb0ae9f5-5e46-43c3-ae31-23a68848c96d\") " pod="openshift-infra/auto-csr-approver-29536476-dsmkc"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.452135 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7rq\" (UniqueName: \"kubernetes.io/projected/bb0ae9f5-5e46-43c3-ae31-23a68848c96d-kube-api-access-tm7rq\") pod \"auto-csr-approver-29536476-dsmkc\" (UID: \"bb0ae9f5-5e46-43c3-ae31-23a68848c96d\") " pod="openshift-infra/auto-csr-approver-29536476-dsmkc"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.471037 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536476-dsmkc"
Feb 27 10:36:00 crc kubenswrapper[4998]: I0227 10:36:00.915847 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536476-dsmkc"]
Feb 27 10:36:00 crc kubenswrapper[4998]: W0227 10:36:00.924849 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb0ae9f5_5e46_43c3_ae31_23a68848c96d.slice/crio-085ce6e59c524b3cea43074eab4a7f5f24cd613edf14f5a8724cca5627619a69 WatchSource:0}: Error finding container 085ce6e59c524b3cea43074eab4a7f5f24cd613edf14f5a8724cca5627619a69: Status 404 returned error can't find the container with id 085ce6e59c524b3cea43074eab4a7f5f24cd613edf14f5a8724cca5627619a69
Feb 27 10:36:01 crc kubenswrapper[4998]: I0227 10:36:01.386564 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536476-dsmkc" event={"ID":"bb0ae9f5-5e46-43c3-ae31-23a68848c96d","Type":"ContainerStarted","Data":"085ce6e59c524b3cea43074eab4a7f5f24cd613edf14f5a8724cca5627619a69"}
Feb 27 10:36:03 crc kubenswrapper[4998]: I0227 10:36:03.742883 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zr6mq"
Feb 27 10:36:03 crc kubenswrapper[4998]: I0227 10:36:03.949206 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-fs8hb"
Feb 27 10:36:05 crc kubenswrapper[4998]: I0227 10:36:05.418853 4998 generic.go:334] "Generic (PLEG): container finished" podID="bb0ae9f5-5e46-43c3-ae31-23a68848c96d" containerID="88b207ed088e77bf20cf381bd8825fa32c16d5908fe2992af8efafd62c4b6824" exitCode=0
Feb 27 10:36:05 crc kubenswrapper[4998]: I0227 10:36:05.418914 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536476-dsmkc" event={"ID":"bb0ae9f5-5e46-43c3-ae31-23a68848c96d","Type":"ContainerDied","Data":"88b207ed088e77bf20cf381bd8825fa32c16d5908fe2992af8efafd62c4b6824"}
Feb 27 10:36:06 crc kubenswrapper[4998]: I0227 10:36:06.672560 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536476-dsmkc"
Feb 27 10:36:06 crc kubenswrapper[4998]: I0227 10:36:06.821059 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm7rq\" (UniqueName: \"kubernetes.io/projected/bb0ae9f5-5e46-43c3-ae31-23a68848c96d-kube-api-access-tm7rq\") pod \"bb0ae9f5-5e46-43c3-ae31-23a68848c96d\" (UID: \"bb0ae9f5-5e46-43c3-ae31-23a68848c96d\") "
Feb 27 10:36:06 crc kubenswrapper[4998]: I0227 10:36:06.830952 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0ae9f5-5e46-43c3-ae31-23a68848c96d-kube-api-access-tm7rq" (OuterVolumeSpecName: "kube-api-access-tm7rq") pod "bb0ae9f5-5e46-43c3-ae31-23a68848c96d" (UID: "bb0ae9f5-5e46-43c3-ae31-23a68848c96d"). InnerVolumeSpecName "kube-api-access-tm7rq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:36:06 crc kubenswrapper[4998]: I0227 10:36:06.923030 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm7rq\" (UniqueName: \"kubernetes.io/projected/bb0ae9f5-5e46-43c3-ae31-23a68848c96d-kube-api-access-tm7rq\") on node \"crc\" DevicePath \"\""
Feb 27 10:36:07 crc kubenswrapper[4998]: I0227 10:36:07.436313 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536476-dsmkc" event={"ID":"bb0ae9f5-5e46-43c3-ae31-23a68848c96d","Type":"ContainerDied","Data":"085ce6e59c524b3cea43074eab4a7f5f24cd613edf14f5a8724cca5627619a69"}
Feb 27 10:36:07 crc kubenswrapper[4998]: I0227 10:36:07.436351 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="085ce6e59c524b3cea43074eab4a7f5f24cd613edf14f5a8724cca5627619a69"
Feb 27 10:36:07 crc kubenswrapper[4998]: I0227 10:36:07.436386 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536476-dsmkc"
Feb 27 10:36:07 crc kubenswrapper[4998]: I0227 10:36:07.733908 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536470-fzk6x"]
Feb 27 10:36:07 crc kubenswrapper[4998]: I0227 10:36:07.740428 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536470-fzk6x"]
Feb 27 10:36:08 crc kubenswrapper[4998]: I0227 10:36:08.776584 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eab32a1f-971a-416a-9568-fa7b00bb0476" path="/var/lib/kubelet/pods/eab32a1f-971a-416a-9568-fa7b00bb0476/volumes"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.347585 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jqwq2"]
Feb 27 10:36:20 crc kubenswrapper[4998]: E0227 10:36:20.348520 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0ae9f5-5e46-43c3-ae31-23a68848c96d" containerName="oc"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.348539 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0ae9f5-5e46-43c3-ae31-23a68848c96d" containerName="oc"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.348714 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0ae9f5-5e46-43c3-ae31-23a68848c96d" containerName="oc"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.349610 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.352995 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.353219 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tjvqd"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.354447 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.356953 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.380633 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jqwq2"]
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.440647 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76jmv\" (UniqueName: \"kubernetes.io/projected/63329382-7f66-4eb9-9376-3ca15773a907-kube-api-access-76jmv\") pod \"dnsmasq-dns-675f4bcbfc-jqwq2\" (UID: \"63329382-7f66-4eb9-9376-3ca15773a907\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2"
Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.440705 4998 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63329382-7f66-4eb9-9376-3ca15773a907-config\") pod \"dnsmasq-dns-675f4bcbfc-jqwq2\" (UID: \"63329382-7f66-4eb9-9376-3ca15773a907\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.454377 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2764j"] Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.455715 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.457144 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.499635 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2764j"] Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.542084 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76jmv\" (UniqueName: \"kubernetes.io/projected/63329382-7f66-4eb9-9376-3ca15773a907-kube-api-access-76jmv\") pod \"dnsmasq-dns-675f4bcbfc-jqwq2\" (UID: \"63329382-7f66-4eb9-9376-3ca15773a907\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.542146 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63329382-7f66-4eb9-9376-3ca15773a907-config\") pod \"dnsmasq-dns-675f4bcbfc-jqwq2\" (UID: \"63329382-7f66-4eb9-9376-3ca15773a907\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.542181 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-dns-svc\") 
pod \"dnsmasq-dns-78dd6ddcc-2764j\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.542198 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrcdw\" (UniqueName: \"kubernetes.io/projected/8d738168-4761-4f32-9100-0b9ec5979911-kube-api-access-lrcdw\") pod \"dnsmasq-dns-78dd6ddcc-2764j\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.542307 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-config\") pod \"dnsmasq-dns-78dd6ddcc-2764j\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.545471 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63329382-7f66-4eb9-9376-3ca15773a907-config\") pod \"dnsmasq-dns-675f4bcbfc-jqwq2\" (UID: \"63329382-7f66-4eb9-9376-3ca15773a907\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.561180 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76jmv\" (UniqueName: \"kubernetes.io/projected/63329382-7f66-4eb9-9376-3ca15773a907-kube-api-access-76jmv\") pod \"dnsmasq-dns-675f4bcbfc-jqwq2\" (UID: \"63329382-7f66-4eb9-9376-3ca15773a907\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.643876 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2764j\" (UID: 
\"8d738168-4761-4f32-9100-0b9ec5979911\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.644246 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrcdw\" (UniqueName: \"kubernetes.io/projected/8d738168-4761-4f32-9100-0b9ec5979911-kube-api-access-lrcdw\") pod \"dnsmasq-dns-78dd6ddcc-2764j\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.644341 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-config\") pod \"dnsmasq-dns-78dd6ddcc-2764j\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.644949 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2764j\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.645314 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-config\") pod \"dnsmasq-dns-78dd6ddcc-2764j\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.664935 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrcdw\" (UniqueName: \"kubernetes.io/projected/8d738168-4761-4f32-9100-0b9ec5979911-kube-api-access-lrcdw\") pod \"dnsmasq-dns-78dd6ddcc-2764j\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 
crc kubenswrapper[4998]: I0227 10:36:20.686322 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.770621 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:20 crc kubenswrapper[4998]: I0227 10:36:20.994594 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2764j"] Feb 27 10:36:21 crc kubenswrapper[4998]: W0227 10:36:21.003187 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d738168_4761_4f32_9100_0b9ec5979911.slice/crio-99e656a53b9968d9c98437726ab345c15d9d847be8dfd9e69ddb682854dc9cd8 WatchSource:0}: Error finding container 99e656a53b9968d9c98437726ab345c15d9d847be8dfd9e69ddb682854dc9cd8: Status 404 returned error can't find the container with id 99e656a53b9968d9c98437726ab345c15d9d847be8dfd9e69ddb682854dc9cd8 Feb 27 10:36:21 crc kubenswrapper[4998]: W0227 10:36:21.097527 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63329382_7f66_4eb9_9376_3ca15773a907.slice/crio-b4b27d98621e8e506e9ebcfc5b382fcef00ec66c428e4a229b9f585b4bc10532 WatchSource:0}: Error finding container b4b27d98621e8e506e9ebcfc5b382fcef00ec66c428e4a229b9f585b4bc10532: Status 404 returned error can't find the container with id b4b27d98621e8e506e9ebcfc5b382fcef00ec66c428e4a229b9f585b4bc10532 Feb 27 10:36:21 crc kubenswrapper[4998]: I0227 10:36:21.099644 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jqwq2"] Feb 27 10:36:21 crc kubenswrapper[4998]: I0227 10:36:21.527125 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" 
event={"ID":"63329382-7f66-4eb9-9376-3ca15773a907","Type":"ContainerStarted","Data":"b4b27d98621e8e506e9ebcfc5b382fcef00ec66c428e4a229b9f585b4bc10532"} Feb 27 10:36:21 crc kubenswrapper[4998]: I0227 10:36:21.528119 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" event={"ID":"8d738168-4761-4f32-9100-0b9ec5979911","Type":"ContainerStarted","Data":"99e656a53b9968d9c98437726ab345c15d9d847be8dfd9e69ddb682854dc9cd8"} Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.263543 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jqwq2"] Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.282911 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zgmbf"] Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.297745 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zgmbf"] Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.297865 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.384976 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-config\") pod \"dnsmasq-dns-5ccc8479f9-zgmbf\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.385099 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-zgmbf\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.385130 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxch\" (UniqueName: \"kubernetes.io/projected/2f5c960f-4048-47fa-8c05-f48d6702f1a3-kube-api-access-plxch\") pod \"dnsmasq-dns-5ccc8479f9-zgmbf\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.486954 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-zgmbf\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.486995 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxch\" (UniqueName: \"kubernetes.io/projected/2f5c960f-4048-47fa-8c05-f48d6702f1a3-kube-api-access-plxch\") pod \"dnsmasq-dns-5ccc8479f9-zgmbf\" (UID: 
\"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.487076 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-config\") pod \"dnsmasq-dns-5ccc8479f9-zgmbf\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.488199 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-config\") pod \"dnsmasq-dns-5ccc8479f9-zgmbf\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.488345 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-zgmbf\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.511323 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxch\" (UniqueName: \"kubernetes.io/projected/2f5c960f-4048-47fa-8c05-f48d6702f1a3-kube-api-access-plxch\") pod \"dnsmasq-dns-5ccc8479f9-zgmbf\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.547125 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2764j"] Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.576603 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r99b8"] Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.577987 4998 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.586597 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r99b8"] Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.631587 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.689377 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-config\") pod \"dnsmasq-dns-57d769cc4f-r99b8\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.689503 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-r99b8\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.689549 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8kvp\" (UniqueName: \"kubernetes.io/projected/f44997ba-4122-479b-b1c6-f57d2c9a3e05-kube-api-access-l8kvp\") pod \"dnsmasq-dns-57d769cc4f-r99b8\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.791742 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-config\") pod \"dnsmasq-dns-57d769cc4f-r99b8\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.792066 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-r99b8\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.792085 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8kvp\" (UniqueName: \"kubernetes.io/projected/f44997ba-4122-479b-b1c6-f57d2c9a3e05-kube-api-access-l8kvp\") pod \"dnsmasq-dns-57d769cc4f-r99b8\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.793491 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-r99b8\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.793600 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-config\") pod \"dnsmasq-dns-57d769cc4f-r99b8\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 10:36:23.815147 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8kvp\" (UniqueName: \"kubernetes.io/projected/f44997ba-4122-479b-b1c6-f57d2c9a3e05-kube-api-access-l8kvp\") pod \"dnsmasq-dns-57d769cc4f-r99b8\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:23 crc kubenswrapper[4998]: I0227 
10:36:23.911147 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.109632 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zgmbf"] Feb 27 10:36:24 crc kubenswrapper[4998]: W0227 10:36:24.119329 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f5c960f_4048_47fa_8c05_f48d6702f1a3.slice/crio-6ebcf193cc49db5b8565701c0853831d1d5037b5a5ebfb02f88b5903d750f4d8 WatchSource:0}: Error finding container 6ebcf193cc49db5b8565701c0853831d1d5037b5a5ebfb02f88b5903d750f4d8: Status 404 returned error can't find the container with id 6ebcf193cc49db5b8565701c0853831d1d5037b5a5ebfb02f88b5903d750f4d8 Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.327420 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r99b8"] Feb 27 10:36:24 crc kubenswrapper[4998]: W0227 10:36:24.334792 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf44997ba_4122_479b_b1c6_f57d2c9a3e05.slice/crio-a9f14d2890a237ce43653e7ceca3467500c0695f08ace043e2d9dd5a2e42f81a WatchSource:0}: Error finding container a9f14d2890a237ce43653e7ceca3467500c0695f08ace043e2d9dd5a2e42f81a: Status 404 returned error can't find the container with id a9f14d2890a237ce43653e7ceca3467500c0695f08ace043e2d9dd5a2e42f81a Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.418954 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.420492 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.426318 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.426576 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.426853 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dnxmh" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.427264 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.427820 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.428563 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.430415 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.451947 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504262 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504332 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504376 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504391 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504476 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504505 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504521 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4z7d\" 
(UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-kube-api-access-z4z7d\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504538 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504565 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504614 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.504632 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.585567 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" 
event={"ID":"f44997ba-4122-479b-b1c6-f57d2c9a3e05","Type":"ContainerStarted","Data":"a9f14d2890a237ce43653e7ceca3467500c0695f08ace043e2d9dd5a2e42f81a"} Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.587281 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" event={"ID":"2f5c960f-4048-47fa-8c05-f48d6702f1a3","Type":"ContainerStarted","Data":"6ebcf193cc49db5b8565701c0853831d1d5037b5a5ebfb02f88b5903d750f4d8"} Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606355 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606408 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606459 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606492 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 
10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606528 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606585 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606624 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606650 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4z7d\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-kube-api-access-z4z7d\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606671 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606693 4998 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.606719 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.609189 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.609598 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.609930 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.609953 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.610579 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.611008 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.613495 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.614732 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.620946 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.622695 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.630082 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4z7d\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-kube-api-access-z4z7d\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.638315 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.724134 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.729873 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.742676 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.742802 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.742940 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.743034 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.743144 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.744594 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q8jmt" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.744752 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.745114 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.779725 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.810811 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.810858 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.810891 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.810934 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.810960 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.810983 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcjdq\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-kube-api-access-hcjdq\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.811036 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.811066 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.811099 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.811131 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.811150 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.913052 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.913112 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.913173 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.913219 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.913292 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.913304 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.913747 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.913811 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.913826 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.914188 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.914359 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.914432 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.914467 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.914491 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcjdq\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-kube-api-access-hcjdq\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.915300 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.915345 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.917993 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.920027 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.920175 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.920812 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc 
kubenswrapper[4998]: I0227 10:36:24.921271 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.931545 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcjdq\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-kube-api-access-hcjdq\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:24 crc kubenswrapper[4998]: I0227 10:36:24.937289 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " pod="openstack/rabbitmq-server-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.065350 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.806799 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.811703 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.814456 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.815217 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rdxpc" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.815400 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.822708 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.825145 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.832664 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.927870 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-config-data-default\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.927923 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.928034 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.928174 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.928205 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2d4\" (UniqueName: \"kubernetes.io/projected/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-kube-api-access-5s2d4\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.928503 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-kolla-config\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.928532 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:25 crc kubenswrapper[4998]: I0227 10:36:25.928597 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030102 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030167 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030187 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s2d4\" (UniqueName: \"kubernetes.io/projected/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-kube-api-access-5s2d4\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030280 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-kolla-config\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030303 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030341 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030362 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-config-data-default\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030401 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030395 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.030825 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc 
kubenswrapper[4998]: I0227 10:36:26.031534 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-config-data-default\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.031839 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-kolla-config\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.033745 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.041075 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.050948 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s2d4\" (UniqueName: \"kubernetes.io/projected/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-kube-api-access-5s2d4\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.051410 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.051814 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bffe5e23-2abd-45ce-b167-5cc72eb06ae2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bffe5e23-2abd-45ce-b167-5cc72eb06ae2\") " pod="openstack/openstack-galera-0" Feb 27 10:36:26 crc kubenswrapper[4998]: I0227 10:36:26.137456 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.140126 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.141867 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.145932 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.146195 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.146470 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fmc6k" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.147073 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.152837 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.255138 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.255199 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.255242 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.255553 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.255613 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.255679 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.255709 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8whf8\" (UniqueName: \"kubernetes.io/projected/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-kube-api-access-8whf8\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.255734 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.357444 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.357547 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.357675 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.357703 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8whf8\" (UniqueName: \"kubernetes.io/projected/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-kube-api-access-8whf8\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.357967 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.358062 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.358640 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.360769 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.360814 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.360989 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.361333 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.361551 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.362795 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.364496 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.367388 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.392608 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.396325 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8whf8\" (UniqueName: \"kubernetes.io/projected/7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2-kube-api-access-8whf8\") pod \"openstack-cell1-galera-0\" (UID: \"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.433674 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.435385 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.438460 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.438640 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-8pj8s" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.440331 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.462583 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.465990 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.565131 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-kolla-config\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.565188 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtgxr\" (UniqueName: \"kubernetes.io/projected/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-kube-api-access-vtgxr\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.565216 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.565359 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-config-data\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.565383 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.666714 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-config-data\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.666768 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.666788 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-kolla-config\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.666816 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vtgxr\" (UniqueName: \"kubernetes.io/projected/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-kube-api-access-vtgxr\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.666844 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.669694 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-config-data\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.671313 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-kolla-config\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.674203 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.685488 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgxr\" (UniqueName: \"kubernetes.io/projected/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-kube-api-access-vtgxr\") pod \"memcached-0\" (UID: 
\"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.691984 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f00d2d1a-0f01-465b-9009-3cc7e1544fc0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f00d2d1a-0f01-465b-9009-3cc7e1544fc0\") " pod="openstack/memcached-0" Feb 27 10:36:27 crc kubenswrapper[4998]: I0227 10:36:27.783006 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 27 10:36:29 crc kubenswrapper[4998]: I0227 10:36:29.656384 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:36:29 crc kubenswrapper[4998]: I0227 10:36:29.658933 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:36:29 crc kubenswrapper[4998]: I0227 10:36:29.664498 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-r2gwx" Feb 27 10:36:29 crc kubenswrapper[4998]: I0227 10:36:29.673806 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:36:29 crc kubenswrapper[4998]: I0227 10:36:29.796860 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql5ts\" (UniqueName: \"kubernetes.io/projected/2940ab76-4aef-4a1d-8cd5-6fd4cbced5be-kube-api-access-ql5ts\") pod \"kube-state-metrics-0\" (UID: \"2940ab76-4aef-4a1d-8cd5-6fd4cbced5be\") " pod="openstack/kube-state-metrics-0" Feb 27 10:36:29 crc kubenswrapper[4998]: I0227 10:36:29.898753 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql5ts\" (UniqueName: \"kubernetes.io/projected/2940ab76-4aef-4a1d-8cd5-6fd4cbced5be-kube-api-access-ql5ts\") pod \"kube-state-metrics-0\" (UID: 
\"2940ab76-4aef-4a1d-8cd5-6fd4cbced5be\") " pod="openstack/kube-state-metrics-0" Feb 27 10:36:29 crc kubenswrapper[4998]: I0227 10:36:29.915973 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql5ts\" (UniqueName: \"kubernetes.io/projected/2940ab76-4aef-4a1d-8cd5-6fd4cbced5be-kube-api-access-ql5ts\") pod \"kube-state-metrics-0\" (UID: \"2940ab76-4aef-4a1d-8cd5-6fd4cbced5be\") " pod="openstack/kube-state-metrics-0" Feb 27 10:36:29 crc kubenswrapper[4998]: I0227 10:36:29.976017 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.054803 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-695g4"] Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.056218 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.061878 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.062217 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.062372 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pxsxs" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.076118 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pds2s"] Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.078107 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.081543 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-695g4"] Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.086709 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pds2s"] Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149249 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-combined-ca-bundle\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149335 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-scripts\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149366 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b2c77aed-5925-42b3-a90c-3f7acc4da187-scripts\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149388 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-var-log\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149434 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-var-log-ovn\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149505 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcc9m\" (UniqueName: \"kubernetes.io/projected/b2c77aed-5925-42b3-a90c-3f7acc4da187-kube-api-access-rcc9m\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149541 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-etc-ovs\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149574 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-var-run-ovn\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149606 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-ovn-controller-tls-certs\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149656 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-var-run\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149683 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-var-run\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149712 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-var-lib\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.149736 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkvm\" (UniqueName: \"kubernetes.io/projected/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-kube-api-access-fgkvm\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251373 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcc9m\" (UniqueName: \"kubernetes.io/projected/b2c77aed-5925-42b3-a90c-3f7acc4da187-kube-api-access-rcc9m\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251421 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-etc-ovs\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251449 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-var-run-ovn\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251474 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-ovn-controller-tls-certs\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251511 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-var-run\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251573 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-var-run\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251591 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-var-lib\") 
pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251606 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgkvm\" (UniqueName: \"kubernetes.io/projected/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-kube-api-access-fgkvm\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251621 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-combined-ca-bundle\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251642 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-scripts\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251660 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b2c77aed-5925-42b3-a90c-3f7acc4da187-scripts\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.251680 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-var-log\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 
crc kubenswrapper[4998]: I0227 10:36:33.251695 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-var-log-ovn\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.254238 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b2c77aed-5925-42b3-a90c-3f7acc4da187-scripts\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.262969 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-ovn-controller-tls-certs\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.263004 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-combined-ca-bundle\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.267251 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcc9m\" (UniqueName: \"kubernetes.io/projected/b2c77aed-5925-42b3-a90c-3f7acc4da187-kube-api-access-rcc9m\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.271837 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fgkvm\" (UniqueName: \"kubernetes.io/projected/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-kube-api-access-fgkvm\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.272755 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-scripts\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.280644 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-var-log-ovn\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.280766 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-var-run-ovn\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.280790 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9-var-run\") pod \"ovn-controller-695g4\" (UID: \"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9\") " pod="openstack/ovn-controller-695g4"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.280825 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-var-run\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.280966 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-var-lib\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.280998 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-etc-ovs\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.281070 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b2c77aed-5925-42b3-a90c-3f7acc4da187-var-log\") pod \"ovn-controller-ovs-pds2s\" (UID: \"b2c77aed-5925-42b3-a90c-3f7acc4da187\") " pod="openstack/ovn-controller-ovs-pds2s"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.381593 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-695g4"
Feb 27 10:36:33 crc kubenswrapper[4998]: I0227 10:36:33.393811 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pds2s"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.399885 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.401538 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.403144 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-kcr4g"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.403793 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.403958 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.404115 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.404113 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.419808 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.473989 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6332347d-e3f4-468b-9e36-a1b27163d1cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.474042 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6332347d-e3f4-468b-9e36-a1b27163d1cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.474065 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.474348 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6332347d-e3f4-468b-9e36-a1b27163d1cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.474376 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6332347d-e3f4-468b-9e36-a1b27163d1cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.474430 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6332347d-e3f4-468b-9e36-a1b27163d1cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.474660 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c26qn\" (UniqueName: \"kubernetes.io/projected/6332347d-e3f4-468b-9e36-a1b27163d1cd-kube-api-access-c26qn\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.474900 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6332347d-e3f4-468b-9e36-a1b27163d1cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.578884 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6332347d-e3f4-468b-9e36-a1b27163d1cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.578952 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6332347d-e3f4-468b-9e36-a1b27163d1cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.579165 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6332347d-e3f4-468b-9e36-a1b27163d1cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.579316 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c26qn\" (UniqueName: \"kubernetes.io/projected/6332347d-e3f4-468b-9e36-a1b27163d1cd-kube-api-access-c26qn\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.579407 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6332347d-e3f4-468b-9e36-a1b27163d1cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.579417 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6332347d-e3f4-468b-9e36-a1b27163d1cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.579506 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6332347d-e3f4-468b-9e36-a1b27163d1cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.579548 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6332347d-e3f4-468b-9e36-a1b27163d1cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.579583 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.579871 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.580345 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6332347d-e3f4-468b-9e36-a1b27163d1cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.580750 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6332347d-e3f4-468b-9e36-a1b27163d1cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.585076 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6332347d-e3f4-468b-9e36-a1b27163d1cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.585957 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6332347d-e3f4-468b-9e36-a1b27163d1cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.594197 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6332347d-e3f4-468b-9e36-a1b27163d1cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.596847 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c26qn\" (UniqueName: \"kubernetes.io/projected/6332347d-e3f4-468b-9e36-a1b27163d1cd-kube-api-access-c26qn\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.612690 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6332347d-e3f4-468b-9e36-a1b27163d1cd\") " pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:34 crc kubenswrapper[4998]: I0227 10:36:34.730445 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.830676 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.831780 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.834777 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.835108 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7nd5r"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.835382 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.839167 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.895919 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.918493 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.918555 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.918580 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-config\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.918597 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.918642 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.918664 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.918784 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnt24\" (UniqueName: \"kubernetes.io/projected/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-kube-api-access-lnt24\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:36 crc kubenswrapper[4998]: I0227 10:36:36.918884 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.029890 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.030179 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.030303 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnt24\" (UniqueName: \"kubernetes.io/projected/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-kube-api-access-lnt24\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.030716 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.030933 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.031057 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.031158 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.031199 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.031302 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-config\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.031378 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.032950 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.033774 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-config\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.034377 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.036073 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.038291 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.058481 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnt24\" (UniqueName: \"kubernetes.io/projected/d7fe9696-b73f-4bd1-a13f-ed934d6d8a90-kube-api-access-lnt24\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.061568 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:37 crc kubenswrapper[4998]: I0227 10:36:37.238823 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 27 10:36:39 crc kubenswrapper[4998]: I0227 10:36:39.043104 4998 scope.go:117] "RemoveContainer" containerID="aef75ba98977dfd47e03f2db6a71e5c20a68bcf8ce9430d3c2bd545cbc85f697"
Feb 27 10:36:39 crc kubenswrapper[4998]: E0227 10:36:39.732928 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 27 10:36:39 crc kubenswrapper[4998]: E0227 10:36:39.733075 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-76jmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-jqwq2_openstack(63329382-7f66-4eb9-9376-3ca15773a907): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 10:36:39 crc kubenswrapper[4998]: E0227 10:36:39.734328 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" podUID="63329382-7f66-4eb9-9376-3ca15773a907"
Feb 27 10:36:39 crc kubenswrapper[4998]: E0227 10:36:39.783288 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 27 10:36:39 crc kubenswrapper[4998]: E0227 10:36:39.783740 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrcdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2764j_openstack(8d738168-4761-4f32-9100-0b9ec5979911): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 10:36:39 crc kubenswrapper[4998]: E0227 10:36:39.785357 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" podUID="8d738168-4761-4f32-9100-0b9ec5979911"
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.268219 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.422985 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.434616 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.505150 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.505216 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.536257 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pds2s"]
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.552836 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.580722 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 27 10:36:40 crc kubenswrapper[4998]: W0227 10:36:40.588449 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf00d2d1a_0f01_465b_9009_3cc7e1544fc0.slice/crio-e3a9ff92d1d0620518cb64ff3757f820a4df3d5574b0180a3353a2f566b3ea25 WatchSource:0}: Error finding container e3a9ff92d1d0620518cb64ff3757f820a4df3d5574b0180a3353a2f566b3ea25: Status 404 returned error can't find the container with id e3a9ff92d1d0620518cb64ff3757f820a4df3d5574b0180a3353a2f566b3ea25
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.589563 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-695g4"]
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.596091 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.649429 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 27 10:36:40 crc kubenswrapper[4998]: W0227 10:36:40.650786 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6332347d_e3f4_468b_9e36_a1b27163d1cd.slice/crio-fed36ba53e5f16442166bb3cdb6ef33ed3bc7609478b01df110154ea3a02e267 WatchSource:0}: Error finding container fed36ba53e5f16442166bb3cdb6ef33ed3bc7609478b01df110154ea3a02e267: Status 404 returned error can't find the container with id fed36ba53e5f16442166bb3cdb6ef33ed3bc7609478b01df110154ea3a02e267
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.707554 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ca208a2-3ba0-43e6-a2c4-942c12e54b41","Type":"ContainerStarted","Data":"e52a8922465ed9814efebc82a6bdd9ee0f7bd82c90c5a603d3fa439499654251"}
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.708888 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"68cd6142-df7e-4994-97c0-0bc08ea1e3d4","Type":"ContainerStarted","Data":"8a535c8958ed175671f7276a4ebd6c425e975c31457e7f338ddc48d10db11f8e"}
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.710155 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2940ab76-4aef-4a1d-8cd5-6fd4cbced5be","Type":"ContainerStarted","Data":"b8a23a0969b6a2af551da48b3a73a47ef633ee638b39995b67a82110a5c380a5"}
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.711574 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bffe5e23-2abd-45ce-b167-5cc72eb06ae2","Type":"ContainerStarted","Data":"5ed3f4400d857ff3efe0373ed472145ed1649122e6fb161d0c59e119c90fa109"}
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.712689 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2","Type":"ContainerStarted","Data":"23415f3b8d6c484bc42d360e4d7ed21c465355c24ed7a90b3a9dc75d453b7df1"}
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.713948 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f00d2d1a-0f01-465b-9009-3cc7e1544fc0","Type":"ContainerStarted","Data":"e3a9ff92d1d0620518cb64ff3757f820a4df3d5574b0180a3353a2f566b3ea25"}
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.715385 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-695g4" event={"ID":"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9","Type":"ContainerStarted","Data":"123322db1261829b8111e5ed95dba05bfa9578e9627e75f889aca84aafadffa4"}
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.716340 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6332347d-e3f4-468b-9e36-a1b27163d1cd","Type":"ContainerStarted","Data":"fed36ba53e5f16442166bb3cdb6ef33ed3bc7609478b01df110154ea3a02e267"}
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.718120 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pds2s" event={"ID":"b2c77aed-5925-42b3-a90c-3f7acc4da187","Type":"ContainerStarted","Data":"c8a3eb6ce36cbba1254f70e51142f48f8ba6d161174f9b4275435f14c02afa59"}
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.883197 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 27 10:36:40 crc kubenswrapper[4998]: W0227 10:36:40.884632 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7fe9696_b73f_4bd1_a13f_ed934d6d8a90.slice/crio-fffcb16e71acbcce2a955b586bf5c26d2470ff9d05785299a72ee9d2cce7e6c8 WatchSource:0}: Error finding container fffcb16e71acbcce2a955b586bf5c26d2470ff9d05785299a72ee9d2cce7e6c8: Status 404 returned error can't find the container with id fffcb16e71acbcce2a955b586bf5c26d2470ff9d05785299a72ee9d2cce7e6c8
Feb 27 10:36:40 crc kubenswrapper[4998]: I0227 10:36:40.994697 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2"
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.016442 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2764j"
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.095741 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-config\") pod \"8d738168-4761-4f32-9100-0b9ec5979911\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") "
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.096066 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrcdw\" (UniqueName: \"kubernetes.io/projected/8d738168-4761-4f32-9100-0b9ec5979911-kube-api-access-lrcdw\") pod \"8d738168-4761-4f32-9100-0b9ec5979911\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") "
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.096203 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76jmv\" (UniqueName: \"kubernetes.io/projected/63329382-7f66-4eb9-9376-3ca15773a907-kube-api-access-76jmv\") pod \"63329382-7f66-4eb9-9376-3ca15773a907\" (UID: \"63329382-7f66-4eb9-9376-3ca15773a907\") "
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.096313 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63329382-7f66-4eb9-9376-3ca15773a907-config\") pod \"63329382-7f66-4eb9-9376-3ca15773a907\" (UID: \"63329382-7f66-4eb9-9376-3ca15773a907\") "
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.096342 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-dns-svc\") pod \"8d738168-4761-4f32-9100-0b9ec5979911\" (UID: \"8d738168-4761-4f32-9100-0b9ec5979911\") "
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.096398 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-config" (OuterVolumeSpecName: "config") pod "8d738168-4761-4f32-9100-0b9ec5979911" (UID: "8d738168-4761-4f32-9100-0b9ec5979911"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.096724 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.097086 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d738168-4761-4f32-9100-0b9ec5979911" (UID: "8d738168-4761-4f32-9100-0b9ec5979911"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.097477 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63329382-7f66-4eb9-9376-3ca15773a907-config" (OuterVolumeSpecName: "config") pod "63329382-7f66-4eb9-9376-3ca15773a907" (UID: "63329382-7f66-4eb9-9376-3ca15773a907"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.101950 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63329382-7f66-4eb9-9376-3ca15773a907-kube-api-access-76jmv" (OuterVolumeSpecName: "kube-api-access-76jmv") pod "63329382-7f66-4eb9-9376-3ca15773a907" (UID: "63329382-7f66-4eb9-9376-3ca15773a907"). InnerVolumeSpecName "kube-api-access-76jmv".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.102426 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d738168-4761-4f32-9100-0b9ec5979911-kube-api-access-lrcdw" (OuterVolumeSpecName: "kube-api-access-lrcdw") pod "8d738168-4761-4f32-9100-0b9ec5979911" (UID: "8d738168-4761-4f32-9100-0b9ec5979911"). InnerVolumeSpecName "kube-api-access-lrcdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.198007 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76jmv\" (UniqueName: \"kubernetes.io/projected/63329382-7f66-4eb9-9376-3ca15773a907-kube-api-access-76jmv\") on node \"crc\" DevicePath \"\"" Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.198054 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63329382-7f66-4eb9-9376-3ca15773a907-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.198067 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d738168-4761-4f32-9100-0b9ec5979911-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.198078 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrcdw\" (UniqueName: \"kubernetes.io/projected/8d738168-4761-4f32-9100-0b9ec5979911-kube-api-access-lrcdw\") on node \"crc\" DevicePath \"\"" Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.726237 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.726248 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2764j" event={"ID":"8d738168-4761-4f32-9100-0b9ec5979911","Type":"ContainerDied","Data":"99e656a53b9968d9c98437726ab345c15d9d847be8dfd9e69ddb682854dc9cd8"} Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.727714 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90","Type":"ContainerStarted","Data":"fffcb16e71acbcce2a955b586bf5c26d2470ff9d05785299a72ee9d2cce7e6c8"} Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.732824 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" event={"ID":"63329382-7f66-4eb9-9376-3ca15773a907","Type":"ContainerDied","Data":"b4b27d98621e8e506e9ebcfc5b382fcef00ec66c428e4a229b9f585b4bc10532"} Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.732923 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jqwq2" Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.777703 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2764j"] Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.783789 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2764j"] Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.810443 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jqwq2"] Feb 27 10:36:41 crc kubenswrapper[4998]: I0227 10:36:41.815429 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jqwq2"] Feb 27 10:36:42 crc kubenswrapper[4998]: I0227 10:36:42.776528 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63329382-7f66-4eb9-9376-3ca15773a907" path="/var/lib/kubelet/pods/63329382-7f66-4eb9-9376-3ca15773a907/volumes" Feb 27 10:36:42 crc kubenswrapper[4998]: I0227 10:36:42.777184 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d738168-4761-4f32-9100-0b9ec5979911" path="/var/lib/kubelet/pods/8d738168-4761-4f32-9100-0b9ec5979911/volumes" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.031490 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f00d2d1a-0f01-465b-9009-3cc7e1544fc0","Type":"ContainerStarted","Data":"a99d87793a76392cd56743d90f349734a59d2724b931bfcd938a6ceccdfdb24f"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.032095 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.036254 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ca208a2-3ba0-43e6-a2c4-942c12e54b41","Type":"ContainerStarted","Data":"f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b"} Feb 27 
10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.038409 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-695g4" event={"ID":"ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9","Type":"ContainerStarted","Data":"5d96f099f8f4fcdc492daf51b349b535c8321fcc1eaf087d46d6fa83b1ddfc87"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.038559 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-695g4" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.040322 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90","Type":"ContainerStarted","Data":"45d721cdbee4f4cd3d9cfc4cb3e1005a2735f20a69135054999211dc729645b9"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.042056 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6332347d-e3f4-468b-9e36-a1b27163d1cd","Type":"ContainerStarted","Data":"9c8e74e56b4c6da07fea7a24f059726a35bd95e3b2195b5282ebffb75b482fca"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.044288 4998 generic.go:334] "Generic (PLEG): container finished" podID="b2c77aed-5925-42b3-a90c-3f7acc4da187" containerID="249189e7386c81d64384c6f738ea77812b9a2a895831b903c8672bcaff99d4e7" exitCode=0 Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.044362 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pds2s" event={"ID":"b2c77aed-5925-42b3-a90c-3f7acc4da187","Type":"ContainerDied","Data":"249189e7386c81d64384c6f738ea77812b9a2a895831b903c8672bcaff99d4e7"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.061738 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.293661339 podStartE2EDuration="29.061715809s" podCreationTimestamp="2026-02-27 10:36:27 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.59216697 +0000 UTC 
m=+1152.590437938" lastFinishedPulling="2026-02-27 10:36:54.3602214 +0000 UTC m=+1166.358492408" observedRunningTime="2026-02-27 10:36:56.04779016 +0000 UTC m=+1168.046061158" watchObservedRunningTime="2026-02-27 10:36:56.061715809 +0000 UTC m=+1168.059986777" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.065992 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2","Type":"ContainerStarted","Data":"39bb37f11d769fe3597be00d92acf4bd2443709081fed24254d8cbea445c2897"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.072974 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2940ab76-4aef-4a1d-8cd5-6fd4cbced5be","Type":"ContainerStarted","Data":"9dc2a66377bafe558f9015d2aeb659428ad2b9001f08812180a6c776d0ef7b6e"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.075652 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.077580 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-695g4" podStartSLOduration=9.307750841 podStartE2EDuration="23.077561069s" podCreationTimestamp="2026-02-27 10:36:33 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.5921886 +0000 UTC m=+1152.590459568" lastFinishedPulling="2026-02-27 10:36:54.361998828 +0000 UTC m=+1166.360269796" observedRunningTime="2026-02-27 10:36:56.068675483 +0000 UTC m=+1168.066946451" watchObservedRunningTime="2026-02-27 10:36:56.077561069 +0000 UTC m=+1168.075832037" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.081648 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" event={"ID":"2f5c960f-4048-47fa-8c05-f48d6702f1a3","Type":"ContainerDied","Data":"d24b842ba0b1398781ea950d733f916be34aabb8db7b9465fe9faf45ca3a40c2"} Feb 27 10:36:56 crc 
kubenswrapper[4998]: I0227 10:36:56.081649 4998 generic.go:334] "Generic (PLEG): container finished" podID="2f5c960f-4048-47fa-8c05-f48d6702f1a3" containerID="d24b842ba0b1398781ea950d733f916be34aabb8db7b9465fe9faf45ca3a40c2" exitCode=0 Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.088799 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"68cd6142-df7e-4994-97c0-0bc08ea1e3d4","Type":"ContainerStarted","Data":"009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.090724 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bffe5e23-2abd-45ce-b167-5cc72eb06ae2","Type":"ContainerStarted","Data":"f37acf6167d494d84d2c2252af8a41f26ba0a003cf4038cfacbd8c10c9eff71d"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.092483 4998 generic.go:334] "Generic (PLEG): container finished" podID="f44997ba-4122-479b-b1c6-f57d2c9a3e05" containerID="e79948874259a9d049576469ad5ffd38f1090b62c892a5064091b2bfa83cc3f0" exitCode=0 Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.092518 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" event={"ID":"f44997ba-4122-479b-b1c6-f57d2c9a3e05","Type":"ContainerDied","Data":"e79948874259a9d049576469ad5ffd38f1090b62c892a5064091b2bfa83cc3f0"} Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.186987 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.753621176 podStartE2EDuration="27.186970293s" podCreationTimestamp="2026-02-27 10:36:29 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.571816204 +0000 UTC m=+1152.570087172" lastFinishedPulling="2026-02-27 10:36:55.005165321 +0000 UTC m=+1167.003436289" observedRunningTime="2026-02-27 10:36:56.178650085 +0000 UTC m=+1168.176921053" watchObservedRunningTime="2026-02-27 
10:36:56.186970293 +0000 UTC m=+1168.185241261" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.542392 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jtqtq"] Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.546102 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.552915 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.573911 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jtqtq"] Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.658714 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-config\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.659037 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.659082 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5rjm\" (UniqueName: \"kubernetes.io/projected/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-kube-api-access-r5rjm\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc 
kubenswrapper[4998]: I0227 10:36:56.659123 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-ovn-rundir\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.659191 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-combined-ca-bundle\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.659213 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-ovs-rundir\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.694530 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r99b8"] Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.720122 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdmgb"] Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.724019 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.727557 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.735600 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdmgb"] Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.761183 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-combined-ca-bundle\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.761269 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-ovs-rundir\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.761570 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-config\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.761621 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 
10:36:56.761658 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-ovs-rundir\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.761678 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5rjm\" (UniqueName: \"kubernetes.io/projected/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-kube-api-access-r5rjm\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.761754 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-ovn-rundir\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.761878 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-ovn-rundir\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.762535 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-config\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.768897 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-combined-ca-bundle\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.778062 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.788193 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5rjm\" (UniqueName: \"kubernetes.io/projected/2dd08b20-6d5e-4d2a-8237-fabf05188a4e-kube-api-access-r5rjm\") pod \"ovn-controller-metrics-jtqtq\" (UID: \"2dd08b20-6d5e-4d2a-8237-fabf05188a4e\") " pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.832577 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zgmbf"] Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.858673 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qw4jj"] Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.859823 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.862736 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf5ql\" (UniqueName: \"kubernetes.io/projected/e2c169ad-e5d1-483b-be21-538827ae0f1d-kube-api-access-wf5ql\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.862814 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.862835 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-config\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.862886 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.864662 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.877548 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jtqtq" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.887998 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qw4jj"] Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.963926 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.964199 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.964257 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-config\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.964291 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.964325 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.964352 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6vr\" (UniqueName: \"kubernetes.io/projected/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-kube-api-access-zc6vr\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.964374 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.964437 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-config\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.964514 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf5ql\" (UniqueName: \"kubernetes.io/projected/e2c169ad-e5d1-483b-be21-538827ae0f1d-kube-api-access-wf5ql\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.965140 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.966139 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.967317 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-config\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:56 crc kubenswrapper[4998]: I0227 10:36:56.983682 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf5ql\" (UniqueName: \"kubernetes.io/projected/e2c169ad-e5d1-483b-be21-538827ae0f1d-kube-api-access-wf5ql\") pod \"dnsmasq-dns-7fd796d7df-hdmgb\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.047498 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.066624 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.066716 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.066793 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc6vr\" (UniqueName: \"kubernetes.io/projected/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-kube-api-access-zc6vr\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.066860 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.066933 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-config\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" 
Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.067889 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-config\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.068412 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.068636 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.069074 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.098264 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc6vr\" (UniqueName: \"kubernetes.io/projected/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-kube-api-access-zc6vr\") pod \"dnsmasq-dns-86db49b7ff-qw4jj\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.112316 4998 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-ovs-pds2s" event={"ID":"b2c77aed-5925-42b3-a90c-3f7acc4da187","Type":"ContainerStarted","Data":"cf3b9bbca5c9f705c5ccc7ee4b91ba73845f6a2710602c3ac1e44e938afc1357"} Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.112365 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pds2s" event={"ID":"b2c77aed-5925-42b3-a90c-3f7acc4da187","Type":"ContainerStarted","Data":"451be114f4a3100e5e83c6a7cbc7118cc54fa229166393dd1dcd9a262ef67859"} Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.113578 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.113609 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.118697 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" event={"ID":"f44997ba-4122-479b-b1c6-f57d2c9a3e05","Type":"ContainerStarted","Data":"0b1c528b120d05a4df1910fc666213ac3e8c0e8ac30ce39465fdacb48722d492"} Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.119019 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.121325 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" event={"ID":"2f5c960f-4048-47fa-8c05-f48d6702f1a3","Type":"ContainerStarted","Data":"7e139b43fa02174757b76edace2e15a59d7d438ac1152749b59b1c2996398766"} Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.160568 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" podStartSLOduration=4.137084799 podStartE2EDuration="34.160548159s" podCreationTimestamp="2026-02-27 10:36:23 +0000 UTC" firstStartedPulling="2026-02-27 
10:36:24.336726859 +0000 UTC m=+1136.334997827" lastFinishedPulling="2026-02-27 10:36:54.360190199 +0000 UTC m=+1166.358461187" observedRunningTime="2026-02-27 10:36:57.156541109 +0000 UTC m=+1169.154812077" watchObservedRunningTime="2026-02-27 10:36:57.160548159 +0000 UTC m=+1169.158819137" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.175587 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-pds2s" podStartSLOduration=10.41303381 podStartE2EDuration="24.175568462s" podCreationTimestamp="2026-02-27 10:36:33 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.544882857 +0000 UTC m=+1152.543153825" lastFinishedPulling="2026-02-27 10:36:54.307417509 +0000 UTC m=+1166.305688477" observedRunningTime="2026-02-27 10:36:57.139662816 +0000 UTC m=+1169.137933794" watchObservedRunningTime="2026-02-27 10:36:57.175568462 +0000 UTC m=+1169.173839430" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.200785 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jtqtq"] Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.210420 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" podStartSLOduration=4.485378075 podStartE2EDuration="34.210398874s" podCreationTimestamp="2026-02-27 10:36:23 +0000 UTC" firstStartedPulling="2026-02-27 10:36:24.121977412 +0000 UTC m=+1136.120248390" lastFinishedPulling="2026-02-27 10:36:53.846998221 +0000 UTC m=+1165.845269189" observedRunningTime="2026-02-27 10:36:57.178829787 +0000 UTC m=+1169.177100755" watchObservedRunningTime="2026-02-27 10:36:57.210398874 +0000 UTC m=+1169.208669842" Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.218732 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:36:57 crc kubenswrapper[4998]: W0227 10:36:57.534952 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2c169ad_e5d1_483b_be21_538827ae0f1d.slice/crio-71f927a64c10d8e56408a68ed90bfc6835f7aa88f2167dd438bd38e939deefc1 WatchSource:0}: Error finding container 71f927a64c10d8e56408a68ed90bfc6835f7aa88f2167dd438bd38e939deefc1: Status 404 returned error can't find the container with id 71f927a64c10d8e56408a68ed90bfc6835f7aa88f2167dd438bd38e939deefc1 Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.535338 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdmgb"] Feb 27 10:36:57 crc kubenswrapper[4998]: I0227 10:36:57.669720 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qw4jj"] Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.131040 4998 generic.go:334] "Generic (PLEG): container finished" podID="e2c169ad-e5d1-483b-be21-538827ae0f1d" containerID="b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842" exitCode=0 Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.131330 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" event={"ID":"e2c169ad-e5d1-483b-be21-538827ae0f1d","Type":"ContainerDied","Data":"b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842"} Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.131358 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" event={"ID":"e2c169ad-e5d1-483b-be21-538827ae0f1d","Type":"ContainerStarted","Data":"71f927a64c10d8e56408a68ed90bfc6835f7aa88f2167dd438bd38e939deefc1"} Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.134860 4998 generic.go:334] "Generic (PLEG): container finished" podID="3ec16531-729c-451c-b5a6-0bc04b3a1b3c" 
containerID="612ccacdc69672c5a2ff77de3871fff060984f107d460d1a708b108861dd7ba7" exitCode=0 Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.134941 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" event={"ID":"3ec16531-729c-451c-b5a6-0bc04b3a1b3c","Type":"ContainerDied","Data":"612ccacdc69672c5a2ff77de3871fff060984f107d460d1a708b108861dd7ba7"} Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.134985 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" event={"ID":"3ec16531-729c-451c-b5a6-0bc04b3a1b3c","Type":"ContainerStarted","Data":"f13aaba53e80e55fca3628314818cb4465eff21412055710b79d70c87ab575d2"} Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.143483 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" podUID="f44997ba-4122-479b-b1c6-f57d2c9a3e05" containerName="dnsmasq-dns" containerID="cri-o://0b1c528b120d05a4df1910fc666213ac3e8c0e8ac30ce39465fdacb48722d492" gracePeriod=10 Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.143592 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jtqtq" event={"ID":"2dd08b20-6d5e-4d2a-8237-fabf05188a4e","Type":"ContainerStarted","Data":"22b8637a9c9ffd5ebafacc28350cf951de6860f85f6d54ea7ba24775925c0dc9"} Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.144321 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" podUID="2f5c960f-4048-47fa-8c05-f48d6702f1a3" containerName="dnsmasq-dns" containerID="cri-o://7e139b43fa02174757b76edace2e15a59d7d438ac1152749b59b1c2996398766" gracePeriod=10 Feb 27 10:36:58 crc kubenswrapper[4998]: I0227 10:36:58.144397 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.149805 4998 
generic.go:334] "Generic (PLEG): container finished" podID="bffe5e23-2abd-45ce-b167-5cc72eb06ae2" containerID="f37acf6167d494d84d2c2252af8a41f26ba0a003cf4038cfacbd8c10c9eff71d" exitCode=0 Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.149921 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bffe5e23-2abd-45ce-b167-5cc72eb06ae2","Type":"ContainerDied","Data":"f37acf6167d494d84d2c2252af8a41f26ba0a003cf4038cfacbd8c10c9eff71d"} Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.151665 4998 generic.go:334] "Generic (PLEG): container finished" podID="7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2" containerID="39bb37f11d769fe3597be00d92acf4bd2443709081fed24254d8cbea445c2897" exitCode=0 Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.151734 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2","Type":"ContainerDied","Data":"39bb37f11d769fe3597be00d92acf4bd2443709081fed24254d8cbea445c2897"} Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.154768 4998 generic.go:334] "Generic (PLEG): container finished" podID="f44997ba-4122-479b-b1c6-f57d2c9a3e05" containerID="0b1c528b120d05a4df1910fc666213ac3e8c0e8ac30ce39465fdacb48722d492" exitCode=0 Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.154829 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" event={"ID":"f44997ba-4122-479b-b1c6-f57d2c9a3e05","Type":"ContainerDied","Data":"0b1c528b120d05a4df1910fc666213ac3e8c0e8ac30ce39465fdacb48722d492"} Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.157058 4998 generic.go:334] "Generic (PLEG): container finished" podID="2f5c960f-4048-47fa-8c05-f48d6702f1a3" containerID="7e139b43fa02174757b76edace2e15a59d7d438ac1152749b59b1c2996398766" exitCode=0 Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.157153 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" event={"ID":"2f5c960f-4048-47fa-8c05-f48d6702f1a3","Type":"ContainerDied","Data":"7e139b43fa02174757b76edace2e15a59d7d438ac1152749b59b1c2996398766"} Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.868101 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.925729 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-config\") pod \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.925876 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-dns-svc\") pod \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.925956 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plxch\" (UniqueName: \"kubernetes.io/projected/2f5c960f-4048-47fa-8c05-f48d6702f1a3-kube-api-access-plxch\") pod \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\" (UID: \"2f5c960f-4048-47fa-8c05-f48d6702f1a3\") " Feb 27 10:36:59 crc kubenswrapper[4998]: I0227 10:36:59.932526 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5c960f-4048-47fa-8c05-f48d6702f1a3-kube-api-access-plxch" (OuterVolumeSpecName: "kube-api-access-plxch") pod "2f5c960f-4048-47fa-8c05-f48d6702f1a3" (UID: "2f5c960f-4048-47fa-8c05-f48d6702f1a3"). InnerVolumeSpecName "kube-api-access-plxch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.028592 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plxch\" (UniqueName: \"kubernetes.io/projected/2f5c960f-4048-47fa-8c05-f48d6702f1a3-kube-api-access-plxch\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.034176 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.115767 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f5c960f-4048-47fa-8c05-f48d6702f1a3" (UID: "2f5c960f-4048-47fa-8c05-f48d6702f1a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.116403 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-config" (OuterVolumeSpecName: "config") pod "2f5c960f-4048-47fa-8c05-f48d6702f1a3" (UID: "2f5c960f-4048-47fa-8c05-f48d6702f1a3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.129550 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8kvp\" (UniqueName: \"kubernetes.io/projected/f44997ba-4122-479b-b1c6-f57d2c9a3e05-kube-api-access-l8kvp\") pod \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.129670 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-config\") pod \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.129714 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-dns-svc\") pod \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\" (UID: \"f44997ba-4122-479b-b1c6-f57d2c9a3e05\") " Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.130066 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.130085 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f5c960f-4048-47fa-8c05-f48d6702f1a3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.134557 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44997ba-4122-479b-b1c6-f57d2c9a3e05-kube-api-access-l8kvp" (OuterVolumeSpecName: "kube-api-access-l8kvp") pod "f44997ba-4122-479b-b1c6-f57d2c9a3e05" (UID: "f44997ba-4122-479b-b1c6-f57d2c9a3e05"). 
InnerVolumeSpecName "kube-api-access-l8kvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.166088 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-config" (OuterVolumeSpecName: "config") pod "f44997ba-4122-479b-b1c6-f57d2c9a3e05" (UID: "f44997ba-4122-479b-b1c6-f57d2c9a3e05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.175451 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" event={"ID":"e2c169ad-e5d1-483b-be21-538827ae0f1d","Type":"ContainerStarted","Data":"6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599"} Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.175578 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f44997ba-4122-479b-b1c6-f57d2c9a3e05" (UID: "f44997ba-4122-479b-b1c6-f57d2c9a3e05"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.175662 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.179480 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d7fe9696-b73f-4bd1-a13f-ed934d6d8a90","Type":"ContainerStarted","Data":"5cf2aab78d4e2c98407a9e5b53b1e1b4aa6b6ca462e2a103802f2b1c722b10dc"} Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.181983 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" event={"ID":"3ec16531-729c-451c-b5a6-0bc04b3a1b3c","Type":"ContainerStarted","Data":"86a67fa708ba3f13438b131b4c6c003e77ba8233e031690602c89353a08f487f"} Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.182744 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.184336 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bffe5e23-2abd-45ce-b167-5cc72eb06ae2","Type":"ContainerStarted","Data":"cf3444f5045e946057930152581f490a9164d6507025501ba6ff847fe12eee5c"} Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.189326 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2","Type":"ContainerStarted","Data":"34953ca38c07f64efe2dc2a16685d6719c7be657217fee63f737552709b7e332"} Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.199592 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" event={"ID":"f44997ba-4122-479b-b1c6-f57d2c9a3e05","Type":"ContainerDied","Data":"a9f14d2890a237ce43653e7ceca3467500c0695f08ace043e2d9dd5a2e42f81a"} Feb 27 10:37:00 crc kubenswrapper[4998]: 
I0227 10:37:00.199650 4998 scope.go:117] "RemoveContainer" containerID="0b1c528b120d05a4df1910fc666213ac3e8c0e8ac30ce39465fdacb48722d492" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.199818 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-r99b8" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.200965 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" podStartSLOduration=4.200948069 podStartE2EDuration="4.200948069s" podCreationTimestamp="2026-02-27 10:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:37:00.194804812 +0000 UTC m=+1172.193075780" watchObservedRunningTime="2026-02-27 10:37:00.200948069 +0000 UTC m=+1172.199219037" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.205998 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" event={"ID":"2f5c960f-4048-47fa-8c05-f48d6702f1a3","Type":"ContainerDied","Data":"6ebcf193cc49db5b8565701c0853831d1d5037b5a5ebfb02f88b5903d750f4d8"} Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.206664 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-zgmbf" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.219966 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.143925802 podStartE2EDuration="34.219946821s" podCreationTimestamp="2026-02-27 10:36:26 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.284625805 +0000 UTC m=+1152.282896783" lastFinishedPulling="2026-02-27 10:36:54.360646814 +0000 UTC m=+1166.358917802" observedRunningTime="2026-02-27 10:37:00.209406282 +0000 UTC m=+1172.207677270" watchObservedRunningTime="2026-02-27 10:37:00.219946821 +0000 UTC m=+1172.218217789" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.231571 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.231843 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8kvp\" (UniqueName: \"kubernetes.io/projected/f44997ba-4122-479b-b1c6-f57d2c9a3e05-kube-api-access-l8kvp\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.233681 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44997ba-4122-479b-b1c6-f57d2c9a3e05-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.236304 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" podStartSLOduration=4.236291388 podStartE2EDuration="4.236291388s" podCreationTimestamp="2026-02-27 10:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:37:00.228679742 +0000 UTC m=+1172.226950720" 
watchObservedRunningTime="2026-02-27 10:37:00.236291388 +0000 UTC m=+1172.234562356" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.261399 4998 scope.go:117] "RemoveContainer" containerID="e79948874259a9d049576469ad5ffd38f1090b62c892a5064091b2bfa83cc3f0" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.262550 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.308571653 podStartE2EDuration="25.262524812s" podCreationTimestamp="2026-02-27 10:36:35 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.889260138 +0000 UTC m=+1152.887531106" lastFinishedPulling="2026-02-27 10:36:59.843213297 +0000 UTC m=+1171.841484265" observedRunningTime="2026-02-27 10:37:00.24505167 +0000 UTC m=+1172.243322638" watchObservedRunningTime="2026-02-27 10:37:00.262524812 +0000 UTC m=+1172.260795800" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.277863 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.353344836 podStartE2EDuration="36.277838075s" podCreationTimestamp="2026-02-27 10:36:24 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.592145929 +0000 UTC m=+1152.590416897" lastFinishedPulling="2026-02-27 10:36:54.516639168 +0000 UTC m=+1166.514910136" observedRunningTime="2026-02-27 10:37:00.268946349 +0000 UTC m=+1172.267217337" watchObservedRunningTime="2026-02-27 10:37:00.277838075 +0000 UTC m=+1172.276109043" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.306463 4998 scope.go:117] "RemoveContainer" containerID="7e139b43fa02174757b76edace2e15a59d7d438ac1152749b59b1c2996398766" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.343550 4998 scope.go:117] "RemoveContainer" containerID="d24b842ba0b1398781ea950d733f916be34aabb8db7b9465fe9faf45ca3a40c2" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.367177 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-r99b8"] Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.384785 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-r99b8"] Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.402649 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zgmbf"] Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.412817 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zgmbf"] Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.776941 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f5c960f-4048-47fa-8c05-f48d6702f1a3" path="/var/lib/kubelet/pods/2f5c960f-4048-47fa-8c05-f48d6702f1a3/volumes" Feb 27 10:37:00 crc kubenswrapper[4998]: I0227 10:37:00.777801 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44997ba-4122-479b-b1c6-f57d2c9a3e05" path="/var/lib/kubelet/pods/f44997ba-4122-479b-b1c6-f57d2c9a3e05/volumes" Feb 27 10:37:01 crc kubenswrapper[4998]: I0227 10:37:01.215083 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jtqtq" event={"ID":"2dd08b20-6d5e-4d2a-8237-fabf05188a4e","Type":"ContainerStarted","Data":"9d46c69941b8ab61de56bd69f118d0314bf2782dc3dccaeb94932d88139e21c6"} Feb 27 10:37:01 crc kubenswrapper[4998]: I0227 10:37:01.217896 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6332347d-e3f4-468b-9e36-a1b27163d1cd","Type":"ContainerStarted","Data":"acfd6f8ef79830f5f6cab53835887de20fefa5b6afd2d23a52c0fd4368e82d38"} Feb 27 10:37:01 crc kubenswrapper[4998]: I0227 10:37:01.236144 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jtqtq" podStartSLOduration=2.54760301 podStartE2EDuration="5.236124068s" podCreationTimestamp="2026-02-27 10:36:56 +0000 UTC" firstStartedPulling="2026-02-27 
10:36:57.203770631 +0000 UTC m=+1169.202041599" lastFinishedPulling="2026-02-27 10:36:59.892291689 +0000 UTC m=+1171.890562657" observedRunningTime="2026-02-27 10:37:01.229306349 +0000 UTC m=+1173.227577317" watchObservedRunningTime="2026-02-27 10:37:01.236124068 +0000 UTC m=+1173.234395036" Feb 27 10:37:01 crc kubenswrapper[4998]: I0227 10:37:01.239533 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 27 10:37:01 crc kubenswrapper[4998]: I0227 10:37:01.266443 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.02481908 podStartE2EDuration="28.266417414s" podCreationTimestamp="2026-02-27 10:36:33 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.653315169 +0000 UTC m=+1152.651586137" lastFinishedPulling="2026-02-27 10:36:59.894913493 +0000 UTC m=+1171.893184471" observedRunningTime="2026-02-27 10:37:01.252920529 +0000 UTC m=+1173.251191497" watchObservedRunningTime="2026-02-27 10:37:01.266417414 +0000 UTC m=+1173.264688382" Feb 27 10:37:01 crc kubenswrapper[4998]: I0227 10:37:01.281469 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 27 10:37:01 crc kubenswrapper[4998]: I0227 10:37:01.730634 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 27 10:37:01 crc kubenswrapper[4998]: I0227 10:37:01.764073 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.226934 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.227358 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.265174 4998 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.270070 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.519785 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 27 10:37:02 crc kubenswrapper[4998]: E0227 10:37:02.520197 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5c960f-4048-47fa-8c05-f48d6702f1a3" containerName="init"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.520286 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5c960f-4048-47fa-8c05-f48d6702f1a3" containerName="init"
Feb 27 10:37:02 crc kubenswrapper[4998]: E0227 10:37:02.520319 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44997ba-4122-479b-b1c6-f57d2c9a3e05" containerName="dnsmasq-dns"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.520327 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44997ba-4122-479b-b1c6-f57d2c9a3e05" containerName="dnsmasq-dns"
Feb 27 10:37:02 crc kubenswrapper[4998]: E0227 10:37:02.520364 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5c960f-4048-47fa-8c05-f48d6702f1a3" containerName="dnsmasq-dns"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.520374 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5c960f-4048-47fa-8c05-f48d6702f1a3" containerName="dnsmasq-dns"
Feb 27 10:37:02 crc kubenswrapper[4998]: E0227 10:37:02.520393 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44997ba-4122-479b-b1c6-f57d2c9a3e05" containerName="init"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.520401 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44997ba-4122-479b-b1c6-f57d2c9a3e05" containerName="init"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.520594 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f5c960f-4048-47fa-8c05-f48d6702f1a3" containerName="dnsmasq-dns"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.520606 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44997ba-4122-479b-b1c6-f57d2c9a3e05" containerName="dnsmasq-dns"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.521606 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.525792 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.525896 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cctr2"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.525807 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.528254 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.533831 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.678464 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.678510 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.678560 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.678598 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.678622 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-config\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.678739 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-scripts\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.678793 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4mv4\" (UniqueName: \"kubernetes.io/projected/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-kube-api-access-c4mv4\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.780021 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.780082 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.780107 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-config\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.780157 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-scripts\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.780181 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4mv4\" (UniqueName: \"kubernetes.io/projected/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-kube-api-access-c4mv4\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.780242 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.780265 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.781109 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-config\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.781116 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-scripts\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.781397 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.784783 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.786872 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.786950 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.787148 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.809283 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4mv4\" (UniqueName: \"kubernetes.io/projected/196e9f4a-19f0-4a5d-b07b-fdfadfce3f87-kube-api-access-c4mv4\") pod \"ovn-northd-0\" (UID: \"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87\") " pod="openstack/ovn-northd-0"
Feb 27 10:37:02 crc kubenswrapper[4998]: I0227 10:37:02.844548 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 27 10:37:03 crc kubenswrapper[4998]: I0227 10:37:03.262175 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 27 10:37:04 crc kubenswrapper[4998]: I0227 10:37:04.249207 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87","Type":"ContainerStarted","Data":"9bf9550065d488238a6d88c31fd0cdeaa9fe8ac4d4f2ec6705fb2c244b8a4472"}
Feb 27 10:37:06 crc kubenswrapper[4998]: I0227 10:37:06.263259 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 27 10:37:06 crc kubenswrapper[4998]: I0227 10:37:06.271402 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 27 10:37:06 crc kubenswrapper[4998]: I0227 10:37:06.310349 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87","Type":"ContainerStarted","Data":"62c1b637d5c5b9a5b34b6ce5af3c843871a7fe695010b2cf8539fe2a279974de"}
Feb 27 10:37:06 crc kubenswrapper[4998]: I0227 10:37:06.310449 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"196e9f4a-19f0-4a5d-b07b-fdfadfce3f87","Type":"ContainerStarted","Data":"a85dd2480f280d80700a7796cdef8d9374528d0d30c8d58d507d4b3d1c327290"}
Feb 27 10:37:06 crc kubenswrapper[4998]: I0227 10:37:06.310538 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 27 10:37:06 crc kubenswrapper[4998]: I0227 10:37:06.340212 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.220473386 podStartE2EDuration="4.340191479s" podCreationTimestamp="2026-02-27 10:37:02 +0000 UTC" firstStartedPulling="2026-02-27 10:37:03.267172491 +0000 UTC m=+1175.265443449" lastFinishedPulling="2026-02-27 10:37:04.386890574 +0000 UTC m=+1176.385161542" observedRunningTime="2026-02-27 10:37:06.334999583 +0000 UTC m=+1178.333270561" watchObservedRunningTime="2026-02-27 10:37:06.340191479 +0000 UTC m=+1178.338462447"
Feb 27 10:37:06 crc kubenswrapper[4998]: I0227 10:37:06.377683 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.050418 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb"
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.220393 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj"
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.281323 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdmgb"]
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.322006 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" podUID="e2c169ad-e5d1-483b-be21-538827ae0f1d" containerName="dnsmasq-dns" containerID="cri-o://6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599" gracePeriod=10
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.396697 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.489503 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.489684 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.565298 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.806172 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb"
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.906967 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-config\") pod \"e2c169ad-e5d1-483b-be21-538827ae0f1d\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") "
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.907618 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-dns-svc\") pod \"e2c169ad-e5d1-483b-be21-538827ae0f1d\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") "
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.907745 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf5ql\" (UniqueName: \"kubernetes.io/projected/e2c169ad-e5d1-483b-be21-538827ae0f1d-kube-api-access-wf5ql\") pod \"e2c169ad-e5d1-483b-be21-538827ae0f1d\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") "
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.907819 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-ovsdbserver-nb\") pod \"e2c169ad-e5d1-483b-be21-538827ae0f1d\" (UID: \"e2c169ad-e5d1-483b-be21-538827ae0f1d\") "
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.913279 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c169ad-e5d1-483b-be21-538827ae0f1d-kube-api-access-wf5ql" (OuterVolumeSpecName: "kube-api-access-wf5ql") pod "e2c169ad-e5d1-483b-be21-538827ae0f1d" (UID: "e2c169ad-e5d1-483b-be21-538827ae0f1d"). InnerVolumeSpecName "kube-api-access-wf5ql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.944490 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-config" (OuterVolumeSpecName: "config") pod "e2c169ad-e5d1-483b-be21-538827ae0f1d" (UID: "e2c169ad-e5d1-483b-be21-538827ae0f1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.950564 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2c169ad-e5d1-483b-be21-538827ae0f1d" (UID: "e2c169ad-e5d1-483b-be21-538827ae0f1d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:37:07 crc kubenswrapper[4998]: I0227 10:37:07.962049 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2c169ad-e5d1-483b-be21-538827ae0f1d" (UID: "e2c169ad-e5d1-483b-be21-538827ae0f1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.010035 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.010077 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf5ql\" (UniqueName: \"kubernetes.io/projected/e2c169ad-e5d1-483b-be21-538827ae0f1d-kube-api-access-wf5ql\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.010095 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.010108 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c169ad-e5d1-483b-be21-538827ae0f1d-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.331191 4998 generic.go:334] "Generic (PLEG): container finished" podID="e2c169ad-e5d1-483b-be21-538827ae0f1d" containerID="6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599" exitCode=0
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.331255 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" event={"ID":"e2c169ad-e5d1-483b-be21-538827ae0f1d","Type":"ContainerDied","Data":"6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599"}
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.331305 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb" event={"ID":"e2c169ad-e5d1-483b-be21-538827ae0f1d","Type":"ContainerDied","Data":"71f927a64c10d8e56408a68ed90bfc6835f7aa88f2167dd438bd38e939deefc1"}
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.331324 4998 scope.go:117] "RemoveContainer" containerID="6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.331341 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hdmgb"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.350840 4998 scope.go:117] "RemoveContainer" containerID="b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.366415 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdmgb"]
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.372464 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdmgb"]
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.382962 4998 scope.go:117] "RemoveContainer" containerID="6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599"
Feb 27 10:37:08 crc kubenswrapper[4998]: E0227 10:37:08.383390 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599\": container with ID starting with 6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599 not found: ID does not exist" containerID="6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.383423 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599"} err="failed to get container status \"6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599\": rpc error: code = NotFound desc = could not find container \"6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599\": container with ID starting with 6b389ba838724db86fc0a5ecbf0338aab448620bb3b8193d892580dbb2193599 not found: ID does not exist"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.383445 4998 scope.go:117] "RemoveContainer" containerID="b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842"
Feb 27 10:37:08 crc kubenswrapper[4998]: E0227 10:37:08.383848 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842\": container with ID starting with b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842 not found: ID does not exist" containerID="b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.383869 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842"} err="failed to get container status \"b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842\": rpc error: code = NotFound desc = could not find container \"b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842\": container with ID starting with b4a68918c620f1e2fdaf59e8bd00a7831c57a057ea11f7340fa03bcac790f842 not found: ID does not exist"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.407424 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.800375 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c169ad-e5d1-483b-be21-538827ae0f1d" path="/var/lib/kubelet/pods/e2c169ad-e5d1-483b-be21-538827ae0f1d/volumes"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.853495 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e6b9-account-create-update-zb9jg"]
Feb 27 10:37:08 crc kubenswrapper[4998]: E0227 10:37:08.853968 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c169ad-e5d1-483b-be21-538827ae0f1d" containerName="init"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.853994 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c169ad-e5d1-483b-be21-538827ae0f1d" containerName="init"
Feb 27 10:37:08 crc kubenswrapper[4998]: E0227 10:37:08.854010 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c169ad-e5d1-483b-be21-538827ae0f1d" containerName="dnsmasq-dns"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.854019 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c169ad-e5d1-483b-be21-538827ae0f1d" containerName="dnsmasq-dns"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.854247 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c169ad-e5d1-483b-be21-538827ae0f1d" containerName="dnsmasq-dns"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.854797 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.857292 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.863003 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e6b9-account-create-update-zb9jg"]
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.873964 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mfjlb"]
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.875325 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.894505 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mfjlb"]
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.923826 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w4h6\" (UniqueName: \"kubernetes.io/projected/c5297f13-d069-44e9-aa42-17bf298602e4-kube-api-access-5w4h6\") pod \"keystone-e6b9-account-create-update-zb9jg\" (UID: \"c5297f13-d069-44e9-aa42-17bf298602e4\") " pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.923888 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5297f13-d069-44e9-aa42-17bf298602e4-operator-scripts\") pod \"keystone-e6b9-account-create-update-zb9jg\" (UID: \"c5297f13-d069-44e9-aa42-17bf298602e4\") " pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.992153 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-q5h9v"]
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.993556 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:08 crc kubenswrapper[4998]: I0227 10:37:08.999180 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q5h9v"]
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.026010 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w4h6\" (UniqueName: \"kubernetes.io/projected/c5297f13-d069-44e9-aa42-17bf298602e4-kube-api-access-5w4h6\") pod \"keystone-e6b9-account-create-update-zb9jg\" (UID: \"c5297f13-d069-44e9-aa42-17bf298602e4\") " pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.026059 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qc8\" (UniqueName: \"kubernetes.io/projected/472a79a1-5809-4914-b8a3-1aa3a708bb9a-kube-api-access-67qc8\") pod \"keystone-db-create-mfjlb\" (UID: \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\") " pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.026092 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5297f13-d069-44e9-aa42-17bf298602e4-operator-scripts\") pod \"keystone-e6b9-account-create-update-zb9jg\" (UID: \"c5297f13-d069-44e9-aa42-17bf298602e4\") " pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.026139 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/472a79a1-5809-4914-b8a3-1aa3a708bb9a-operator-scripts\") pod \"keystone-db-create-mfjlb\" (UID: \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\") " pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.026786 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5297f13-d069-44e9-aa42-17bf298602e4-operator-scripts\") pod \"keystone-e6b9-account-create-update-zb9jg\" (UID: \"c5297f13-d069-44e9-aa42-17bf298602e4\") " pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.043806 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w4h6\" (UniqueName: \"kubernetes.io/projected/c5297f13-d069-44e9-aa42-17bf298602e4-kube-api-access-5w4h6\") pod \"keystone-e6b9-account-create-update-zb9jg\" (UID: \"c5297f13-d069-44e9-aa42-17bf298602e4\") " pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.083070 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f443-account-create-update-2mv4s"]
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.084007 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.085939 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.091387 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f443-account-create-update-2mv4s"]
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.127969 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-operator-scripts\") pod \"placement-db-create-q5h9v\" (UID: \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\") " pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.128022 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km6qj\" (UniqueName: \"kubernetes.io/projected/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-kube-api-access-km6qj\") pod \"placement-db-create-q5h9v\" (UID: \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\") " pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.128138 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67qc8\" (UniqueName: \"kubernetes.io/projected/472a79a1-5809-4914-b8a3-1aa3a708bb9a-kube-api-access-67qc8\") pod \"keystone-db-create-mfjlb\" (UID: \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\") " pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.128185 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/472a79a1-5809-4914-b8a3-1aa3a708bb9a-operator-scripts\") pod \"keystone-db-create-mfjlb\" (UID: \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\") " pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.128814 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/472a79a1-5809-4914-b8a3-1aa3a708bb9a-operator-scripts\") pod \"keystone-db-create-mfjlb\" (UID: \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\") " pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.143176 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67qc8\" (UniqueName: \"kubernetes.io/projected/472a79a1-5809-4914-b8a3-1aa3a708bb9a-kube-api-access-67qc8\") pod \"keystone-db-create-mfjlb\" (UID: \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\") " pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.182856 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.194372 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.230165 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km6qj\" (UniqueName: \"kubernetes.io/projected/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-kube-api-access-km6qj\") pod \"placement-db-create-q5h9v\" (UID: \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\") " pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.230215 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf9jr\" (UniqueName: \"kubernetes.io/projected/91e2f326-c479-4e94-a24f-42ec17281073-kube-api-access-xf9jr\") pod \"placement-f443-account-create-update-2mv4s\" (UID: \"91e2f326-c479-4e94-a24f-42ec17281073\") " pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.230349 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e2f326-c479-4e94-a24f-42ec17281073-operator-scripts\") pod \"placement-f443-account-create-update-2mv4s\" (UID: \"91e2f326-c479-4e94-a24f-42ec17281073\") " pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.230383 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-operator-scripts\") pod \"placement-db-create-q5h9v\" (UID: \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\") " pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.231151 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-operator-scripts\") pod \"placement-db-create-q5h9v\" (UID: \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\") " pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.247756 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km6qj\" (UniqueName: \"kubernetes.io/projected/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-kube-api-access-km6qj\") pod \"placement-db-create-q5h9v\" (UID: \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\") " pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.306738 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.331762 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e2f326-c479-4e94-a24f-42ec17281073-operator-scripts\") pod \"placement-f443-account-create-update-2mv4s\" (UID: \"91e2f326-c479-4e94-a24f-42ec17281073\") " pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.332107 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf9jr\" (UniqueName: \"kubernetes.io/projected/91e2f326-c479-4e94-a24f-42ec17281073-kube-api-access-xf9jr\") pod \"placement-f443-account-create-update-2mv4s\" (UID: \"91e2f326-c479-4e94-a24f-42ec17281073\") " pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.332717 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e2f326-c479-4e94-a24f-42ec17281073-operator-scripts\") pod \"placement-f443-account-create-update-2mv4s\" (UID: \"91e2f326-c479-4e94-a24f-42ec17281073\") " pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.351601 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf9jr\" (UniqueName: \"kubernetes.io/projected/91e2f326-c479-4e94-a24f-42ec17281073-kube-api-access-xf9jr\") pod \"placement-f443-account-create-update-2mv4s\" (UID: \"91e2f326-c479-4e94-a24f-42ec17281073\") " pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.399822 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.660470 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mfjlb"]
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.668218 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e6b9-account-create-update-zb9jg"]
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.831673 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q5h9v"]
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.935178 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-9bmcl"]
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.940150 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9bmcl"
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.962334 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9bmcl"]
Feb 27 10:37:09 crc kubenswrapper[4998]: I0227 10:37:09.995468 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.028634 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f443-account-create-update-2mv4s"]
Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.050693 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-dns-svc\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl"
Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.063498 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v96q7\" (UniqueName: \"kubernetes.io/projected/b29cf5b5-0760-4c81-a1e5-e434017c2414-kube-api-access-v96q7\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl"
Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.063541 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl"
Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.063634 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-config\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.063653 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.164915 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-dns-svc\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.165000 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v96q7\" (UniqueName: \"kubernetes.io/projected/b29cf5b5-0760-4c81-a1e5-e434017c2414-kube-api-access-v96q7\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.165023 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.165067 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-config\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.165084 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.167024 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-dns-svc\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.167429 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-config\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.167701 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.168092 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: 
\"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.187551 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v96q7\" (UniqueName: \"kubernetes.io/projected/b29cf5b5-0760-4c81-a1e5-e434017c2414-kube-api-access-v96q7\") pod \"dnsmasq-dns-698758b865-9bmcl\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.273649 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.367935 4998 generic.go:334] "Generic (PLEG): container finished" podID="d29ee408-e38e-4bd8-b05c-9fe12d166c9e" containerID="2ad90dc2b9d9de393c6d67401b20841b7cc89ec38646e127c07ddd254b80f184" exitCode=0 Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.368070 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q5h9v" event={"ID":"d29ee408-e38e-4bd8-b05c-9fe12d166c9e","Type":"ContainerDied","Data":"2ad90dc2b9d9de393c6d67401b20841b7cc89ec38646e127c07ddd254b80f184"} Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.368288 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q5h9v" event={"ID":"d29ee408-e38e-4bd8-b05c-9fe12d166c9e","Type":"ContainerStarted","Data":"03e21f94acd00d51467baf132729d928599ef5f7ba95bca14da9e5cc455846e8"} Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.369873 4998 generic.go:334] "Generic (PLEG): container finished" podID="472a79a1-5809-4914-b8a3-1aa3a708bb9a" containerID="61af5b34fc0062ba56c71848973511ea8009a7154cd302708c805aa4a500d9b1" exitCode=0 Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.369946 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mfjlb" 
event={"ID":"472a79a1-5809-4914-b8a3-1aa3a708bb9a","Type":"ContainerDied","Data":"61af5b34fc0062ba56c71848973511ea8009a7154cd302708c805aa4a500d9b1"} Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.369976 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mfjlb" event={"ID":"472a79a1-5809-4914-b8a3-1aa3a708bb9a","Type":"ContainerStarted","Data":"03a2c26597dc6c89dc68c85546223e9ff2863f19ce318454fa0a4d63f3487386"} Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.371609 4998 generic.go:334] "Generic (PLEG): container finished" podID="c5297f13-d069-44e9-aa42-17bf298602e4" containerID="5a50d303aa58005bb4829d13042d3c899d47e3382e098a688f8bd18e6f33a20f" exitCode=0 Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.371671 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e6b9-account-create-update-zb9jg" event={"ID":"c5297f13-d069-44e9-aa42-17bf298602e4","Type":"ContainerDied","Data":"5a50d303aa58005bb4829d13042d3c899d47e3382e098a688f8bd18e6f33a20f"} Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.371732 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e6b9-account-create-update-zb9jg" event={"ID":"c5297f13-d069-44e9-aa42-17bf298602e4","Type":"ContainerStarted","Data":"67b7cb8e863322f373ab22923bf846eee6a50cc46f9ebd617545cfe74ddc9c15"} Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.373387 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f443-account-create-update-2mv4s" event={"ID":"91e2f326-c479-4e94-a24f-42ec17281073","Type":"ContainerStarted","Data":"3a8af43fb129cce6cf03894f349e6724ec05c00d5edef40b2543249728a3fae3"} Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.373470 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f443-account-create-update-2mv4s" 
event={"ID":"91e2f326-c479-4e94-a24f-42ec17281073","Type":"ContainerStarted","Data":"c65081342f91781e283df6eaefefe9d1c55541075d10fde4b2f8300f49607f23"} Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.416074 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f443-account-create-update-2mv4s" podStartSLOduration=1.416051891 podStartE2EDuration="1.416051891s" podCreationTimestamp="2026-02-27 10:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:37:10.411281919 +0000 UTC m=+1182.409552907" watchObservedRunningTime="2026-02-27 10:37:10.416051891 +0000 UTC m=+1182.414322869" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.505129 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.505197 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:37:10 crc kubenswrapper[4998]: I0227 10:37:10.701853 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9bmcl"] Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.049124 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.057924 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.059810 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-wl2qg" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.059973 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.060454 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.060587 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.062998 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.184991 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0928c45d-8553-49e6-a068-3e2e75a28c69-cache\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.185057 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0928c45d-8553-49e6-a068-3e2e75a28c69-lock\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.185085 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfslj\" (UniqueName: \"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-kube-api-access-jfslj\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 
27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.185109 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.185277 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.185476 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0928c45d-8553-49e6-a068-3e2e75a28c69-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.287290 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0928c45d-8553-49e6-a068-3e2e75a28c69-cache\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.287359 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0928c45d-8553-49e6-a068-3e2e75a28c69-lock\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.287388 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfslj\" (UniqueName: 
\"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-kube-api-access-jfslj\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.287423 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.287460 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.287541 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0928c45d-8553-49e6-a068-3e2e75a28c69-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: E0227 10:37:11.287647 4998 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 10:37:11 crc kubenswrapper[4998]: E0227 10:37:11.287684 4998 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 10:37:11 crc kubenswrapper[4998]: E0227 10:37:11.287757 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift podName:0928c45d-8553-49e6-a068-3e2e75a28c69 nodeName:}" failed. 
No retries permitted until 2026-02-27 10:37:11.787733585 +0000 UTC m=+1183.786004623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift") pod "swift-storage-0" (UID: "0928c45d-8553-49e6-a068-3e2e75a28c69") : configmap "swift-ring-files" not found Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.287951 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.287984 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0928c45d-8553-49e6-a068-3e2e75a28c69-lock\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.288239 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0928c45d-8553-49e6-a068-3e2e75a28c69-cache\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.296459 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0928c45d-8553-49e6-a068-3e2e75a28c69-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.308534 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfslj\" (UniqueName: 
\"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-kube-api-access-jfslj\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.315761 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.382457 4998 generic.go:334] "Generic (PLEG): container finished" podID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerID="5e38589cb6adadcd54fdd9743121ad7d62ad72c457498e1bb0e1b252a3ffd51a" exitCode=0 Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.382564 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9bmcl" event={"ID":"b29cf5b5-0760-4c81-a1e5-e434017c2414","Type":"ContainerDied","Data":"5e38589cb6adadcd54fdd9743121ad7d62ad72c457498e1bb0e1b252a3ffd51a"} Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.382632 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9bmcl" event={"ID":"b29cf5b5-0760-4c81-a1e5-e434017c2414","Type":"ContainerStarted","Data":"b4503c84c0476327af577e4033f1ebdb16802044b42c485d0141ee11074d0b3b"} Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.384343 4998 generic.go:334] "Generic (PLEG): container finished" podID="91e2f326-c479-4e94-a24f-42ec17281073" containerID="3a8af43fb129cce6cf03894f349e6724ec05c00d5edef40b2543249728a3fae3" exitCode=0 Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.384396 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f443-account-create-update-2mv4s" event={"ID":"91e2f326-c479-4e94-a24f-42ec17281073","Type":"ContainerDied","Data":"3a8af43fb129cce6cf03894f349e6724ec05c00d5edef40b2543249728a3fae3"} Feb 27 
10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.574851 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-c2cn4"] Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.578989 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.583756 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.584989 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.585627 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.591865 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-c2cn4"] Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.696336 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-ring-data-devices\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.696422 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ca09e09-cedc-4476-bbe3-d179893232c8-etc-swift\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.696501 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-scripts\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.696556 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-combined-ca-bundle\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.696615 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-swiftconf\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.696662 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-dispersionconf\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.696705 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbzkn\" (UniqueName: \"kubernetes.io/projected/4ca09e09-cedc-4476-bbe3-d179893232c8-kube-api-access-kbzkn\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799044 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-ring-data-devices\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799119 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ca09e09-cedc-4476-bbe3-d179893232c8-etc-swift\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799167 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-scripts\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799214 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-combined-ca-bundle\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799293 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-swiftconf\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799326 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift\") pod \"swift-storage-0\" (UID: 
\"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799355 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-dispersionconf\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799386 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbzkn\" (UniqueName: \"kubernetes.io/projected/4ca09e09-cedc-4476-bbe3-d179893232c8-kube-api-access-kbzkn\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799819 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ca09e09-cedc-4476-bbe3-d179893232c8-etc-swift\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.799858 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-ring-data-devices\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:11 crc kubenswrapper[4998]: E0227 10:37:11.800037 4998 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 27 10:37:11 crc kubenswrapper[4998]: E0227 10:37:11.800080 4998 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 27 10:37:11 crc kubenswrapper[4998]: E0227 10:37:11.800153 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift podName:0928c45d-8553-49e6-a068-3e2e75a28c69 nodeName:}" failed. No retries permitted until 2026-02-27 10:37:12.800110577 +0000 UTC m=+1184.798381615 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift") pod "swift-storage-0" (UID: "0928c45d-8553-49e6-a068-3e2e75a28c69") : configmap "swift-ring-files" not found
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.800190 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-scripts\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.807663 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-combined-ca-bundle\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.818677 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-swiftconf\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.819891 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbzkn\" (UniqueName: \"kubernetes.io/projected/4ca09e09-cedc-4476-bbe3-d179893232c8-kube-api-access-kbzkn\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.822025 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-dispersionconf\") pod \"swift-ring-rebalance-c2cn4\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.885331 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.965463 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.971243 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:11 crc kubenswrapper[4998]: I0227 10:37:11.980275 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c2cn4"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.003047 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w4h6\" (UniqueName: \"kubernetes.io/projected/c5297f13-d069-44e9-aa42-17bf298602e4-kube-api-access-5w4h6\") pod \"c5297f13-d069-44e9-aa42-17bf298602e4\" (UID: \"c5297f13-d069-44e9-aa42-17bf298602e4\") "
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.003202 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5297f13-d069-44e9-aa42-17bf298602e4-operator-scripts\") pod \"c5297f13-d069-44e9-aa42-17bf298602e4\" (UID: \"c5297f13-d069-44e9-aa42-17bf298602e4\") "
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.004825 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5297f13-d069-44e9-aa42-17bf298602e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5297f13-d069-44e9-aa42-17bf298602e4" (UID: "c5297f13-d069-44e9-aa42-17bf298602e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.007981 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5297f13-d069-44e9-aa42-17bf298602e4-kube-api-access-5w4h6" (OuterVolumeSpecName: "kube-api-access-5w4h6") pod "c5297f13-d069-44e9-aa42-17bf298602e4" (UID: "c5297f13-d069-44e9-aa42-17bf298602e4"). InnerVolumeSpecName "kube-api-access-5w4h6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.104460 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/472a79a1-5809-4914-b8a3-1aa3a708bb9a-operator-scripts\") pod \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\" (UID: \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\") "
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.105132 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/472a79a1-5809-4914-b8a3-1aa3a708bb9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "472a79a1-5809-4914-b8a3-1aa3a708bb9a" (UID: "472a79a1-5809-4914-b8a3-1aa3a708bb9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.105364 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-operator-scripts\") pod \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\" (UID: \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\") "
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.105804 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d29ee408-e38e-4bd8-b05c-9fe12d166c9e" (UID: "d29ee408-e38e-4bd8-b05c-9fe12d166c9e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.105936 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67qc8\" (UniqueName: \"kubernetes.io/projected/472a79a1-5809-4914-b8a3-1aa3a708bb9a-kube-api-access-67qc8\") pod \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\" (UID: \"472a79a1-5809-4914-b8a3-1aa3a708bb9a\") "
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.106488 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km6qj\" (UniqueName: \"kubernetes.io/projected/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-kube-api-access-km6qj\") pod \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\" (UID: \"d29ee408-e38e-4bd8-b05c-9fe12d166c9e\") "
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.107310 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5297f13-d069-44e9-aa42-17bf298602e4-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.107340 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w4h6\" (UniqueName: \"kubernetes.io/projected/c5297f13-d069-44e9-aa42-17bf298602e4-kube-api-access-5w4h6\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.107354 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/472a79a1-5809-4914-b8a3-1aa3a708bb9a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.107364 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.112090 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472a79a1-5809-4914-b8a3-1aa3a708bb9a-kube-api-access-67qc8" (OuterVolumeSpecName: "kube-api-access-67qc8") pod "472a79a1-5809-4914-b8a3-1aa3a708bb9a" (UID: "472a79a1-5809-4914-b8a3-1aa3a708bb9a"). InnerVolumeSpecName "kube-api-access-67qc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.112718 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-kube-api-access-km6qj" (OuterVolumeSpecName: "kube-api-access-km6qj") pod "d29ee408-e38e-4bd8-b05c-9fe12d166c9e" (UID: "d29ee408-e38e-4bd8-b05c-9fe12d166c9e"). InnerVolumeSpecName "kube-api-access-km6qj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.208382 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67qc8\" (UniqueName: \"kubernetes.io/projected/472a79a1-5809-4914-b8a3-1aa3a708bb9a-kube-api-access-67qc8\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.208414 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km6qj\" (UniqueName: \"kubernetes.io/projected/d29ee408-e38e-4bd8-b05c-9fe12d166c9e-kube-api-access-km6qj\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.396145 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-c2cn4"]
Feb 27 10:37:12 crc kubenswrapper[4998]: W0227 10:37:12.398813 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ca09e09_cedc_4476_bbe3_d179893232c8.slice/crio-4b011128ac6b38c9c7bc2b6af664fa367fca730a2932757c609753371cd61c55 WatchSource:0}: Error finding container 4b011128ac6b38c9c7bc2b6af664fa367fca730a2932757c609753371cd61c55: Status 404 returned error can't find the container with id 4b011128ac6b38c9c7bc2b6af664fa367fca730a2932757c609753371cd61c55
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.403638 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q5h9v"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.404630 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q5h9v" event={"ID":"d29ee408-e38e-4bd8-b05c-9fe12d166c9e","Type":"ContainerDied","Data":"03e21f94acd00d51467baf132729d928599ef5f7ba95bca14da9e5cc455846e8"}
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.404681 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03e21f94acd00d51467baf132729d928599ef5f7ba95bca14da9e5cc455846e8"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.408430 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mfjlb" event={"ID":"472a79a1-5809-4914-b8a3-1aa3a708bb9a","Type":"ContainerDied","Data":"03a2c26597dc6c89dc68c85546223e9ff2863f19ce318454fa0a4d63f3487386"}
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.408473 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03a2c26597dc6c89dc68c85546223e9ff2863f19ce318454fa0a4d63f3487386"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.408452 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mfjlb"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.409600 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e6b9-account-create-update-zb9jg" event={"ID":"c5297f13-d069-44e9-aa42-17bf298602e4","Type":"ContainerDied","Data":"67b7cb8e863322f373ab22923bf846eee6a50cc46f9ebd617545cfe74ddc9c15"}
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.409631 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67b7cb8e863322f373ab22923bf846eee6a50cc46f9ebd617545cfe74ddc9c15"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.409670 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e6b9-account-create-update-zb9jg"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.423709 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9bmcl" event={"ID":"b29cf5b5-0760-4c81-a1e5-e434017c2414","Type":"ContainerStarted","Data":"ecebfe9c67e1c6d424fbecd3d09f6b8af170a6407819fbec9d1f23550f8b66ca"}
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.448935 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-9bmcl" podStartSLOduration=3.448912193 podStartE2EDuration="3.448912193s" podCreationTimestamp="2026-02-27 10:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:37:12.446621149 +0000 UTC m=+1184.444892117" watchObservedRunningTime="2026-02-27 10:37:12.448912193 +0000 UTC m=+1184.447183171"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.762208 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.826286 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf9jr\" (UniqueName: \"kubernetes.io/projected/91e2f326-c479-4e94-a24f-42ec17281073-kube-api-access-xf9jr\") pod \"91e2f326-c479-4e94-a24f-42ec17281073\" (UID: \"91e2f326-c479-4e94-a24f-42ec17281073\") "
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.826373 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e2f326-c479-4e94-a24f-42ec17281073-operator-scripts\") pod \"91e2f326-c479-4e94-a24f-42ec17281073\" (UID: \"91e2f326-c479-4e94-a24f-42ec17281073\") "
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.826781 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.826837 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e2f326-c479-4e94-a24f-42ec17281073-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91e2f326-c479-4e94-a24f-42ec17281073" (UID: "91e2f326-c479-4e94-a24f-42ec17281073"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:37:12 crc kubenswrapper[4998]: E0227 10:37:12.826912 4998 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 27 10:37:12 crc kubenswrapper[4998]: E0227 10:37:12.826942 4998 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.826989 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e2f326-c479-4e94-a24f-42ec17281073-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:12 crc kubenswrapper[4998]: E0227 10:37:12.827005 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift podName:0928c45d-8553-49e6-a068-3e2e75a28c69 nodeName:}" failed. No retries permitted until 2026-02-27 10:37:14.826981179 +0000 UTC m=+1186.825252187 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift") pod "swift-storage-0" (UID: "0928c45d-8553-49e6-a068-3e2e75a28c69") : configmap "swift-ring-files" not found
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.831626 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e2f326-c479-4e94-a24f-42ec17281073-kube-api-access-xf9jr" (OuterVolumeSpecName: "kube-api-access-xf9jr") pod "91e2f326-c479-4e94-a24f-42ec17281073" (UID: "91e2f326-c479-4e94-a24f-42ec17281073"). InnerVolumeSpecName "kube-api-access-xf9jr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.915840 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2l67v"]
Feb 27 10:37:12 crc kubenswrapper[4998]: E0227 10:37:12.916160 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472a79a1-5809-4914-b8a3-1aa3a708bb9a" containerName="mariadb-database-create"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.916175 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="472a79a1-5809-4914-b8a3-1aa3a708bb9a" containerName="mariadb-database-create"
Feb 27 10:37:12 crc kubenswrapper[4998]: E0227 10:37:12.916182 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d29ee408-e38e-4bd8-b05c-9fe12d166c9e" containerName="mariadb-database-create"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.916187 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d29ee408-e38e-4bd8-b05c-9fe12d166c9e" containerName="mariadb-database-create"
Feb 27 10:37:12 crc kubenswrapper[4998]: E0227 10:37:12.916202 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5297f13-d069-44e9-aa42-17bf298602e4" containerName="mariadb-account-create-update"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.916210 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5297f13-d069-44e9-aa42-17bf298602e4" containerName="mariadb-account-create-update"
Feb 27 10:37:12 crc kubenswrapper[4998]: E0227 10:37:12.916221 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e2f326-c479-4e94-a24f-42ec17281073" containerName="mariadb-account-create-update"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.916244 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e2f326-c479-4e94-a24f-42ec17281073" containerName="mariadb-account-create-update"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.916398 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5297f13-d069-44e9-aa42-17bf298602e4" containerName="mariadb-account-create-update"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.916409 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="472a79a1-5809-4914-b8a3-1aa3a708bb9a" containerName="mariadb-database-create"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.916416 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e2f326-c479-4e94-a24f-42ec17281073" containerName="mariadb-account-create-update"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.916434 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="d29ee408-e38e-4bd8-b05c-9fe12d166c9e" containerName="mariadb-database-create"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.919946 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2l67v"
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.929221 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf9jr\" (UniqueName: \"kubernetes.io/projected/91e2f326-c479-4e94-a24f-42ec17281073-kube-api-access-xf9jr\") on node \"crc\" DevicePath \"\""
Feb 27 10:37:12 crc kubenswrapper[4998]: I0227 10:37:12.949907 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2l67v"]
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.012146 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c2c7-account-create-update-fclht"]
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.013384 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c2c7-account-create-update-fclht"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.027508 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.030082 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvct8\" (UniqueName: \"kubernetes.io/projected/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-kube-api-access-fvct8\") pod \"glance-db-create-2l67v\" (UID: \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\") " pod="openstack/glance-db-create-2l67v"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.030260 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-operator-scripts\") pod \"glance-db-create-2l67v\" (UID: \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\") " pod="openstack/glance-db-create-2l67v"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.041284 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c2c7-account-create-update-fclht"]
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.131792 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvct8\" (UniqueName: \"kubernetes.io/projected/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-kube-api-access-fvct8\") pod \"glance-db-create-2l67v\" (UID: \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\") " pod="openstack/glance-db-create-2l67v"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.131947 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxffj\" (UniqueName: \"kubernetes.io/projected/4ef6de27-5d07-4c05-9da0-513855fbefa6-kube-api-access-sxffj\") pod \"glance-c2c7-account-create-update-fclht\" (UID: \"4ef6de27-5d07-4c05-9da0-513855fbefa6\") " pod="openstack/glance-c2c7-account-create-update-fclht"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.132005 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6de27-5d07-4c05-9da0-513855fbefa6-operator-scripts\") pod \"glance-c2c7-account-create-update-fclht\" (UID: \"4ef6de27-5d07-4c05-9da0-513855fbefa6\") " pod="openstack/glance-c2c7-account-create-update-fclht"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.132051 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-operator-scripts\") pod \"glance-db-create-2l67v\" (UID: \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\") " pod="openstack/glance-db-create-2l67v"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.133317 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-operator-scripts\") pod \"glance-db-create-2l67v\" (UID: \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\") " pod="openstack/glance-db-create-2l67v"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.153972 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvct8\" (UniqueName: \"kubernetes.io/projected/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-kube-api-access-fvct8\") pod \"glance-db-create-2l67v\" (UID: \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\") " pod="openstack/glance-db-create-2l67v"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.233099 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxffj\" (UniqueName: \"kubernetes.io/projected/4ef6de27-5d07-4c05-9da0-513855fbefa6-kube-api-access-sxffj\") pod \"glance-c2c7-account-create-update-fclht\" (UID: \"4ef6de27-5d07-4c05-9da0-513855fbefa6\") " pod="openstack/glance-c2c7-account-create-update-fclht"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.233165 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6de27-5d07-4c05-9da0-513855fbefa6-operator-scripts\") pod \"glance-c2c7-account-create-update-fclht\" (UID: \"4ef6de27-5d07-4c05-9da0-513855fbefa6\") " pod="openstack/glance-c2c7-account-create-update-fclht"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.234028 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6de27-5d07-4c05-9da0-513855fbefa6-operator-scripts\") pod \"glance-c2c7-account-create-update-fclht\" (UID: \"4ef6de27-5d07-4c05-9da0-513855fbefa6\") " pod="openstack/glance-c2c7-account-create-update-fclht"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.241325 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2l67v"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.251330 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxffj\" (UniqueName: \"kubernetes.io/projected/4ef6de27-5d07-4c05-9da0-513855fbefa6-kube-api-access-sxffj\") pod \"glance-c2c7-account-create-update-fclht\" (UID: \"4ef6de27-5d07-4c05-9da0-513855fbefa6\") " pod="openstack/glance-c2c7-account-create-update-fclht"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.338999 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c2c7-account-create-update-fclht"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.450936 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c2cn4" event={"ID":"4ca09e09-cedc-4476-bbe3-d179893232c8","Type":"ContainerStarted","Data":"4b011128ac6b38c9c7bc2b6af664fa367fca730a2932757c609753371cd61c55"}
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.466680 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f443-account-create-update-2mv4s"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.467499 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f443-account-create-update-2mv4s" event={"ID":"91e2f326-c479-4e94-a24f-42ec17281073","Type":"ContainerDied","Data":"c65081342f91781e283df6eaefefe9d1c55541075d10fde4b2f8300f49607f23"}
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.467571 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65081342f91781e283df6eaefefe9d1c55541075d10fde4b2f8300f49607f23"
Feb 27 10:37:13 crc kubenswrapper[4998]: I0227 10:37:13.467604 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-9bmcl"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:13.707906 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2l67v"]
Feb 27 10:37:16 crc kubenswrapper[4998]: W0227 10:37:13.710772 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2173d9f2_1855_4ce8_bcfe_5dd0a8d99da5.slice/crio-d16fe16a07253b7af8ad28bd185fa26506804ac3ca2cc7d0537974fdb3e37438 WatchSource:0}: Error finding container d16fe16a07253b7af8ad28bd185fa26506804ac3ca2cc7d0537974fdb3e37438: Status 404 returned error can't find the container with id d16fe16a07253b7af8ad28bd185fa26506804ac3ca2cc7d0537974fdb3e37438
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:13.828089 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c2c7-account-create-update-fclht"]
Feb 27 10:37:16 crc kubenswrapper[4998]: W0227 10:37:13.829640 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ef6de27_5d07_4c05_9da0_513855fbefa6.slice/crio-e2eb70a8baf4bf113b619f71e77d4ff70fd5f09cc0a6dbde1735bad866258eba WatchSource:0}: Error finding container e2eb70a8baf4bf113b619f71e77d4ff70fd5f09cc0a6dbde1735bad866258eba: Status 404 returned error can't find the container with id e2eb70a8baf4bf113b619f71e77d4ff70fd5f09cc0a6dbde1735bad866258eba
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.473566 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2l67v" event={"ID":"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5","Type":"ContainerStarted","Data":"d16fe16a07253b7af8ad28bd185fa26506804ac3ca2cc7d0537974fdb3e37438"}
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.474678 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c2c7-account-create-update-fclht" event={"ID":"4ef6de27-5d07-4c05-9da0-513855fbefa6","Type":"ContainerStarted","Data":"e2eb70a8baf4bf113b619f71e77d4ff70fd5f09cc0a6dbde1735bad866258eba"}
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.759965 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-79fcb"]
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.761046 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79fcb"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.769246 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.787457 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-79fcb"]
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.864458 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2c290e-bd53-4009-9c85-fa730a0d104d-operator-scripts\") pod \"root-account-create-update-79fcb\" (UID: \"2b2c290e-bd53-4009-9c85-fa730a0d104d\") " pod="openstack/root-account-create-update-79fcb"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.864538 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vf6k\" (UniqueName: \"kubernetes.io/projected/2b2c290e-bd53-4009-9c85-fa730a0d104d-kube-api-access-9vf6k\") pod \"root-account-create-update-79fcb\" (UID: \"2b2c290e-bd53-4009-9c85-fa730a0d104d\") " pod="openstack/root-account-create-update-79fcb"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.864579 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0"
Feb 27 10:37:16 crc kubenswrapper[4998]: E0227 10:37:14.864723 4998 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 27 10:37:16 crc kubenswrapper[4998]: E0227 10:37:14.864736 4998 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 27 10:37:16 crc kubenswrapper[4998]: E0227 10:37:14.864792 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift podName:0928c45d-8553-49e6-a068-3e2e75a28c69 nodeName:}" failed. No retries permitted until 2026-02-27 10:37:18.864775269 +0000 UTC m=+1190.863046277 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift") pod "swift-storage-0" (UID: "0928c45d-8553-49e6-a068-3e2e75a28c69") : configmap "swift-ring-files" not found
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.966690 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2c290e-bd53-4009-9c85-fa730a0d104d-operator-scripts\") pod \"root-account-create-update-79fcb\" (UID: \"2b2c290e-bd53-4009-9c85-fa730a0d104d\") " pod="openstack/root-account-create-update-79fcb"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.966777 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vf6k\" (UniqueName: \"kubernetes.io/projected/2b2c290e-bd53-4009-9c85-fa730a0d104d-kube-api-access-9vf6k\") pod \"root-account-create-update-79fcb\" (UID: \"2b2c290e-bd53-4009-9c85-fa730a0d104d\") " pod="openstack/root-account-create-update-79fcb"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.967541 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2c290e-bd53-4009-9c85-fa730a0d104d-operator-scripts\") pod \"root-account-create-update-79fcb\" (UID: \"2b2c290e-bd53-4009-9c85-fa730a0d104d\") " pod="openstack/root-account-create-update-79fcb"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:14.988037 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vf6k\" (UniqueName: \"kubernetes.io/projected/2b2c290e-bd53-4009-9c85-fa730a0d104d-kube-api-access-9vf6k\") pod \"root-account-create-update-79fcb\" (UID: \"2b2c290e-bd53-4009-9c85-fa730a0d104d\") " pod="openstack/root-account-create-update-79fcb"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:15.081959 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79fcb"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:16.491630 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2l67v" event={"ID":"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5","Type":"ContainerStarted","Data":"fa0c8c512b2c2d546aec604bc47d0a75a02fa597b9683950982d2e8a1ac3b7e4"}
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:16.495849 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c2c7-account-create-update-fclht" event={"ID":"4ef6de27-5d07-4c05-9da0-513855fbefa6","Type":"ContainerStarted","Data":"4b44d92ff2d42b91e0734dbb0372fcb64f363d789014828f845c926c059df932"}
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:16.508862 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-2l67v" podStartSLOduration=4.5088446189999996 podStartE2EDuration="4.508844619s" podCreationTimestamp="2026-02-27 10:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:37:16.502388381 +0000 UTC m=+1188.500659369" watchObservedRunningTime="2026-02-27 10:37:16.508844619 +0000 UTC m=+1188.507115587"
Feb 27 10:37:16 crc kubenswrapper[4998]: I0227 10:37:16.605380 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-79fcb"]
Feb 27 10:37:17 crc kubenswrapper[4998]: I0227 10:37:17.507392 4998 generic.go:334] "Generic (PLEG): container finished" podID="2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5" containerID="fa0c8c512b2c2d546aec604bc47d0a75a02fa597b9683950982d2e8a1ac3b7e4" exitCode=0
Feb 27 10:37:17 crc kubenswrapper[4998]: I0227 10:37:17.507964 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2l67v" event={"ID":"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5","Type":"ContainerDied","Data":"fa0c8c512b2c2d546aec604bc47d0a75a02fa597b9683950982d2e8a1ac3b7e4"}
Feb 27 10:37:17 crc kubenswrapper[4998]: I0227 10:37:17.509561 4998 generic.go:334] "Generic (PLEG): container finished" podID="4ef6de27-5d07-4c05-9da0-513855fbefa6" containerID="4b44d92ff2d42b91e0734dbb0372fcb64f363d789014828f845c926c059df932" exitCode=0
Feb 27 10:37:17 crc kubenswrapper[4998]: I0227 10:37:17.509696 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c2c7-account-create-update-fclht" event={"ID":"4ef6de27-5d07-4c05-9da0-513855fbefa6","Type":"ContainerDied","Data":"4b44d92ff2d42b91e0734dbb0372fcb64f363d789014828f845c926c059df932"}
Feb 27 10:37:17 crc kubenswrapper[4998]: I0227 10:37:17.522498 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79fcb" event={"ID":"2b2c290e-bd53-4009-9c85-fa730a0d104d","Type":"ContainerDied","Data":"0ba3fcea0f9d222ef24cecc7742419820c91cefe7c2bbeb2a0e7fc1b0076d863"}
Feb 27 10:37:17 crc kubenswrapper[4998]: I0227 10:37:17.524986 4998 generic.go:334] "Generic (PLEG): container finished" podID="2b2c290e-bd53-4009-9c85-fa730a0d104d" containerID="0ba3fcea0f9d222ef24cecc7742419820c91cefe7c2bbeb2a0e7fc1b0076d863" exitCode=0
Feb 27 10:37:17 crc kubenswrapper[4998]: I0227 10:37:17.525085 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79fcb" event={"ID":"2b2c290e-bd53-4009-9c85-fa730a0d104d","Type":"ContainerStarted","Data":"5be30404f2ac6a79ea443ff1d0e7df5dceb51431742ebc1d4c201bbb234f5343"}
Feb 27 10:37:18 crc kubenswrapper[4998]: I0227 10:37:18.937311 4998 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:18 crc kubenswrapper[4998]: E0227 10:37:18.937505 4998 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 10:37:18 crc kubenswrapper[4998]: E0227 10:37:18.937738 4998 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 10:37:18 crc kubenswrapper[4998]: E0227 10:37:18.937799 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift podName:0928c45d-8553-49e6-a068-3e2e75a28c69 nodeName:}" failed. No retries permitted until 2026-02-27 10:37:26.937780016 +0000 UTC m=+1198.936051004 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift") pod "swift-storage-0" (UID: "0928c45d-8553-49e6-a068-3e2e75a28c69") : configmap "swift-ring-files" not found Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.555390 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79fcb" event={"ID":"2b2c290e-bd53-4009-9c85-fa730a0d104d","Type":"ContainerDied","Data":"5be30404f2ac6a79ea443ff1d0e7df5dceb51431742ebc1d4c201bbb234f5343"} Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.555755 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5be30404f2ac6a79ea443ff1d0e7df5dceb51431742ebc1d4c201bbb234f5343" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.556847 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2l67v" 
event={"ID":"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5","Type":"ContainerDied","Data":"d16fe16a07253b7af8ad28bd185fa26506804ac3ca2cc7d0537974fdb3e37438"} Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.556872 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d16fe16a07253b7af8ad28bd185fa26506804ac3ca2cc7d0537974fdb3e37438" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.558058 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c2c7-account-create-update-fclht" event={"ID":"4ef6de27-5d07-4c05-9da0-513855fbefa6","Type":"ContainerDied","Data":"e2eb70a8baf4bf113b619f71e77d4ff70fd5f09cc0a6dbde1735bad866258eba"} Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.558083 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2eb70a8baf4bf113b619f71e77d4ff70fd5f09cc0a6dbde1735bad866258eba" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.621023 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2l67v" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.627590 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c2c7-account-create-update-fclht" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.639093 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-79fcb" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.750532 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvct8\" (UniqueName: \"kubernetes.io/projected/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-kube-api-access-fvct8\") pod \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\" (UID: \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\") " Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.750592 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxffj\" (UniqueName: \"kubernetes.io/projected/4ef6de27-5d07-4c05-9da0-513855fbefa6-kube-api-access-sxffj\") pod \"4ef6de27-5d07-4c05-9da0-513855fbefa6\" (UID: \"4ef6de27-5d07-4c05-9da0-513855fbefa6\") " Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.750619 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6de27-5d07-4c05-9da0-513855fbefa6-operator-scripts\") pod \"4ef6de27-5d07-4c05-9da0-513855fbefa6\" (UID: \"4ef6de27-5d07-4c05-9da0-513855fbefa6\") " Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.750724 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2c290e-bd53-4009-9c85-fa730a0d104d-operator-scripts\") pod \"2b2c290e-bd53-4009-9c85-fa730a0d104d\" (UID: \"2b2c290e-bd53-4009-9c85-fa730a0d104d\") " Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.750791 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vf6k\" (UniqueName: \"kubernetes.io/projected/2b2c290e-bd53-4009-9c85-fa730a0d104d-kube-api-access-9vf6k\") pod \"2b2c290e-bd53-4009-9c85-fa730a0d104d\" (UID: \"2b2c290e-bd53-4009-9c85-fa730a0d104d\") " Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.750893 4998 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-operator-scripts\") pod \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\" (UID: \"2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5\") " Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.751857 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5" (UID: "2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.751911 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2c290e-bd53-4009-9c85-fa730a0d104d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b2c290e-bd53-4009-9c85-fa730a0d104d" (UID: "2b2c290e-bd53-4009-9c85-fa730a0d104d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.752270 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef6de27-5d07-4c05-9da0-513855fbefa6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ef6de27-5d07-4c05-9da0-513855fbefa6" (UID: "4ef6de27-5d07-4c05-9da0-513855fbefa6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.754408 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef6de27-5d07-4c05-9da0-513855fbefa6-kube-api-access-sxffj" (OuterVolumeSpecName: "kube-api-access-sxffj") pod "4ef6de27-5d07-4c05-9da0-513855fbefa6" (UID: "4ef6de27-5d07-4c05-9da0-513855fbefa6"). 
InnerVolumeSpecName "kube-api-access-sxffj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.757098 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-kube-api-access-fvct8" (OuterVolumeSpecName: "kube-api-access-fvct8") pod "2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5" (UID: "2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5"). InnerVolumeSpecName "kube-api-access-fvct8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.757714 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2c290e-bd53-4009-9c85-fa730a0d104d-kube-api-access-9vf6k" (OuterVolumeSpecName: "kube-api-access-9vf6k") pod "2b2c290e-bd53-4009-9c85-fa730a0d104d" (UID: "2b2c290e-bd53-4009-9c85-fa730a0d104d"). InnerVolumeSpecName "kube-api-access-9vf6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.853828 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.853869 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvct8\" (UniqueName: \"kubernetes.io/projected/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5-kube-api-access-fvct8\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.853888 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxffj\" (UniqueName: \"kubernetes.io/projected/4ef6de27-5d07-4c05-9da0-513855fbefa6-kube-api-access-sxffj\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.853926 4998 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef6de27-5d07-4c05-9da0-513855fbefa6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.853954 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2c290e-bd53-4009-9c85-fa730a0d104d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:19 crc kubenswrapper[4998]: I0227 10:37:19.853964 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vf6k\" (UniqueName: \"kubernetes.io/projected/2b2c290e-bd53-4009-9c85-fa730a0d104d-kube-api-access-9vf6k\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.275591 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.368508 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qw4jj"] Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.369072 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" podUID="3ec16531-729c-451c-b5a6-0bc04b3a1b3c" containerName="dnsmasq-dns" containerID="cri-o://86a67fa708ba3f13438b131b4c6c003e77ba8233e031690602c89353a08f487f" gracePeriod=10 Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.567061 4998 generic.go:334] "Generic (PLEG): container finished" podID="3ec16531-729c-451c-b5a6-0bc04b3a1b3c" containerID="86a67fa708ba3f13438b131b4c6c003e77ba8233e031690602c89353a08f487f" exitCode=0 Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.567135 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" event={"ID":"3ec16531-729c-451c-b5a6-0bc04b3a1b3c","Type":"ContainerDied","Data":"86a67fa708ba3f13438b131b4c6c003e77ba8233e031690602c89353a08f487f"} Feb 27 10:37:20 crc 
kubenswrapper[4998]: I0227 10:37:20.569504 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79fcb" Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.569496 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c2cn4" event={"ID":"4ca09e09-cedc-4476-bbe3-d179893232c8","Type":"ContainerStarted","Data":"ac4ce26b5351d10eae2414191b089042c85ffbe89ede6f016126bedf7c98e92f"} Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.569590 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2l67v" Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.569633 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c2c7-account-create-update-fclht" Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.620720 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-c2cn4" podStartSLOduration=2.547059851 podStartE2EDuration="9.620678037s" podCreationTimestamp="2026-02-27 10:37:11 +0000 UTC" firstStartedPulling="2026-02-27 10:37:12.401579739 +0000 UTC m=+1184.399850707" lastFinishedPulling="2026-02-27 10:37:19.475197915 +0000 UTC m=+1191.473468893" observedRunningTime="2026-02-27 10:37:20.596552049 +0000 UTC m=+1192.594823017" watchObservedRunningTime="2026-02-27 10:37:20.620678037 +0000 UTC m=+1192.618949005" Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.863782 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.973585 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-nb\") pod \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.973670 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-config\") pod \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.973738 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-dns-svc\") pod \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.973773 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc6vr\" (UniqueName: \"kubernetes.io/projected/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-kube-api-access-zc6vr\") pod \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.973814 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-sb\") pod \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\" (UID: \"3ec16531-729c-451c-b5a6-0bc04b3a1b3c\") " Feb 27 10:37:20 crc kubenswrapper[4998]: I0227 10:37:20.996851 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-kube-api-access-zc6vr" (OuterVolumeSpecName: "kube-api-access-zc6vr") pod "3ec16531-729c-451c-b5a6-0bc04b3a1b3c" (UID: "3ec16531-729c-451c-b5a6-0bc04b3a1b3c"). InnerVolumeSpecName "kube-api-access-zc6vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.011184 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-config" (OuterVolumeSpecName: "config") pod "3ec16531-729c-451c-b5a6-0bc04b3a1b3c" (UID: "3ec16531-729c-451c-b5a6-0bc04b3a1b3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.011731 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ec16531-729c-451c-b5a6-0bc04b3a1b3c" (UID: "3ec16531-729c-451c-b5a6-0bc04b3a1b3c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.013655 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ec16531-729c-451c-b5a6-0bc04b3a1b3c" (UID: "3ec16531-729c-451c-b5a6-0bc04b3a1b3c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.014188 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ec16531-729c-451c-b5a6-0bc04b3a1b3c" (UID: "3ec16531-729c-451c-b5a6-0bc04b3a1b3c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.076136 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.076186 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.076202 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.076216 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc6vr\" (UniqueName: \"kubernetes.io/projected/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-kube-api-access-zc6vr\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.076247 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec16531-729c-451c-b5a6-0bc04b3a1b3c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.335708 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-79fcb"] Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.344675 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-79fcb"] Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.590000 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" 
event={"ID":"3ec16531-729c-451c-b5a6-0bc04b3a1b3c","Type":"ContainerDied","Data":"f13aaba53e80e55fca3628314818cb4465eff21412055710b79d70c87ab575d2"} Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.590264 4998 scope.go:117] "RemoveContainer" containerID="86a67fa708ba3f13438b131b4c6c003e77ba8233e031690602c89353a08f487f" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.590046 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-qw4jj" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.611003 4998 scope.go:117] "RemoveContainer" containerID="612ccacdc69672c5a2ff77de3871fff060984f107d460d1a708b108861dd7ba7" Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.633461 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qw4jj"] Feb 27 10:37:21 crc kubenswrapper[4998]: I0227 10:37:21.644544 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-qw4jj"] Feb 27 10:37:22 crc kubenswrapper[4998]: I0227 10:37:22.780885 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b2c290e-bd53-4009-9c85-fa730a0d104d" path="/var/lib/kubelet/pods/2b2c290e-bd53-4009-9c85-fa730a0d104d/volumes" Feb 27 10:37:22 crc kubenswrapper[4998]: I0227 10:37:22.781675 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec16531-729c-451c-b5a6-0bc04b3a1b3c" path="/var/lib/kubelet/pods/3ec16531-729c-451c-b5a6-0bc04b3a1b3c/volumes" Feb 27 10:37:22 crc kubenswrapper[4998]: I0227 10:37:22.901939 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.249830 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kdmkv"] Feb 27 10:37:23 crc kubenswrapper[4998]: E0227 10:37:23.250540 4998 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4ef6de27-5d07-4c05-9da0-513855fbefa6" containerName="mariadb-account-create-update" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.250565 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef6de27-5d07-4c05-9da0-513855fbefa6" containerName="mariadb-account-create-update" Feb 27 10:37:23 crc kubenswrapper[4998]: E0227 10:37:23.250584 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec16531-729c-451c-b5a6-0bc04b3a1b3c" containerName="init" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.250593 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec16531-729c-451c-b5a6-0bc04b3a1b3c" containerName="init" Feb 27 10:37:23 crc kubenswrapper[4998]: E0227 10:37:23.250615 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec16531-729c-451c-b5a6-0bc04b3a1b3c" containerName="dnsmasq-dns" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.250623 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec16531-729c-451c-b5a6-0bc04b3a1b3c" containerName="dnsmasq-dns" Feb 27 10:37:23 crc kubenswrapper[4998]: E0227 10:37:23.250643 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2c290e-bd53-4009-9c85-fa730a0d104d" containerName="mariadb-account-create-update" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.250652 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2c290e-bd53-4009-9c85-fa730a0d104d" containerName="mariadb-account-create-update" Feb 27 10:37:23 crc kubenswrapper[4998]: E0227 10:37:23.250667 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5" containerName="mariadb-database-create" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.250675 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5" containerName="mariadb-database-create" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.250884 4998 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2b2c290e-bd53-4009-9c85-fa730a0d104d" containerName="mariadb-account-create-update" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.250907 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef6de27-5d07-4c05-9da0-513855fbefa6" containerName="mariadb-account-create-update" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.250922 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5" containerName="mariadb-database-create" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.250938 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec16531-729c-451c-b5a6-0bc04b3a1b3c" containerName="dnsmasq-dns" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.251581 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.254308 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.254486 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hwvwx" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.258269 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kdmkv"] Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.312623 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-combined-ca-bundle\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.312683 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-config-data\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.312714 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkjz\" (UniqueName: \"kubernetes.io/projected/b4c7cf30-091f-4dea-bbc1-156ad96a5451-kube-api-access-vlkjz\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.312970 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-db-sync-config-data\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.414781 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-combined-ca-bundle\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.414849 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-config-data\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.415775 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlkjz\" (UniqueName: 
\"kubernetes.io/projected/b4c7cf30-091f-4dea-bbc1-156ad96a5451-kube-api-access-vlkjz\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.415848 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-db-sync-config-data\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.420486 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-config-data\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.420940 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-combined-ca-bundle\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.421882 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-db-sync-config-data\") pod \"glance-db-sync-kdmkv\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.433392 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlkjz\" (UniqueName: \"kubernetes.io/projected/b4c7cf30-091f-4dea-bbc1-156ad96a5451-kube-api-access-vlkjz\") pod \"glance-db-sync-kdmkv\" (UID: 
\"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:23 crc kubenswrapper[4998]: I0227 10:37:23.567001 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.123899 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kdmkv"] Feb 27 10:37:24 crc kubenswrapper[4998]: W0227 10:37:24.129456 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4c7cf30_091f_4dea_bbc1_156ad96a5451.slice/crio-b294f2f5a2b0a8cab723d817a197c1d995381f89387cbf10305c211c30d6f6f1 WatchSource:0}: Error finding container b294f2f5a2b0a8cab723d817a197c1d995381f89387cbf10305c211c30d6f6f1: Status 404 returned error can't find the container with id b294f2f5a2b0a8cab723d817a197c1d995381f89387cbf10305c211c30d6f6f1 Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.615733 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kdmkv" event={"ID":"b4c7cf30-091f-4dea-bbc1-156ad96a5451","Type":"ContainerStarted","Data":"b294f2f5a2b0a8cab723d817a197c1d995381f89387cbf10305c211c30d6f6f1"} Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.784007 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tpd7g"] Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.785741 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.788048 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.817134 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tpd7g"] Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.850093 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfh26\" (UniqueName: \"kubernetes.io/projected/e867b7da-2dd2-4112-97cc-9b1ab5b13222-kube-api-access-dfh26\") pod \"root-account-create-update-tpd7g\" (UID: \"e867b7da-2dd2-4112-97cc-9b1ab5b13222\") " pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.850178 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e867b7da-2dd2-4112-97cc-9b1ab5b13222-operator-scripts\") pod \"root-account-create-update-tpd7g\" (UID: \"e867b7da-2dd2-4112-97cc-9b1ab5b13222\") " pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.955086 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfh26\" (UniqueName: \"kubernetes.io/projected/e867b7da-2dd2-4112-97cc-9b1ab5b13222-kube-api-access-dfh26\") pod \"root-account-create-update-tpd7g\" (UID: \"e867b7da-2dd2-4112-97cc-9b1ab5b13222\") " pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.955163 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e867b7da-2dd2-4112-97cc-9b1ab5b13222-operator-scripts\") pod \"root-account-create-update-tpd7g\" (UID: 
\"e867b7da-2dd2-4112-97cc-9b1ab5b13222\") " pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.955836 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e867b7da-2dd2-4112-97cc-9b1ab5b13222-operator-scripts\") pod \"root-account-create-update-tpd7g\" (UID: \"e867b7da-2dd2-4112-97cc-9b1ab5b13222\") " pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:24 crc kubenswrapper[4998]: I0227 10:37:24.983926 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfh26\" (UniqueName: \"kubernetes.io/projected/e867b7da-2dd2-4112-97cc-9b1ab5b13222-kube-api-access-dfh26\") pod \"root-account-create-update-tpd7g\" (UID: \"e867b7da-2dd2-4112-97cc-9b1ab5b13222\") " pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:25 crc kubenswrapper[4998]: I0227 10:37:25.105385 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:25 crc kubenswrapper[4998]: I0227 10:37:25.558560 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tpd7g"] Feb 27 10:37:25 crc kubenswrapper[4998]: W0227 10:37:25.568844 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode867b7da_2dd2_4112_97cc_9b1ab5b13222.slice/crio-2ae9b7e42412c52159d9ba38ae3a833b2c56ef5a84a246b927309b5acf5e415f WatchSource:0}: Error finding container 2ae9b7e42412c52159d9ba38ae3a833b2c56ef5a84a246b927309b5acf5e415f: Status 404 returned error can't find the container with id 2ae9b7e42412c52159d9ba38ae3a833b2c56ef5a84a246b927309b5acf5e415f Feb 27 10:37:25 crc kubenswrapper[4998]: I0227 10:37:25.624430 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tpd7g" event={"ID":"e867b7da-2dd2-4112-97cc-9b1ab5b13222","Type":"ContainerStarted","Data":"2ae9b7e42412c52159d9ba38ae3a833b2c56ef5a84a246b927309b5acf5e415f"} Feb 27 10:37:26 crc kubenswrapper[4998]: I0227 10:37:26.633838 4998 generic.go:334] "Generic (PLEG): container finished" podID="e867b7da-2dd2-4112-97cc-9b1ab5b13222" containerID="42f98cdbca523215a2588b6ba36df2dc314000f1fd3621d1824b3e4eb60b9a3d" exitCode=0 Feb 27 10:37:26 crc kubenswrapper[4998]: I0227 10:37:26.633894 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tpd7g" event={"ID":"e867b7da-2dd2-4112-97cc-9b1ab5b13222","Type":"ContainerDied","Data":"42f98cdbca523215a2588b6ba36df2dc314000f1fd3621d1824b3e4eb60b9a3d"} Feb 27 10:37:26 crc kubenswrapper[4998]: I0227 10:37:26.636191 4998 generic.go:334] "Generic (PLEG): container finished" podID="4ca09e09-cedc-4476-bbe3-d179893232c8" containerID="ac4ce26b5351d10eae2414191b089042c85ffbe89ede6f016126bedf7c98e92f" exitCode=0 Feb 27 10:37:26 crc kubenswrapper[4998]: I0227 10:37:26.636265 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c2cn4" event={"ID":"4ca09e09-cedc-4476-bbe3-d179893232c8","Type":"ContainerDied","Data":"ac4ce26b5351d10eae2414191b089042c85ffbe89ede6f016126bedf7c98e92f"} Feb 27 10:37:26 crc kubenswrapper[4998]: I0227 10:37:26.994885 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:27 crc kubenswrapper[4998]: I0227 10:37:27.009519 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0928c45d-8553-49e6-a068-3e2e75a28c69-etc-swift\") pod \"swift-storage-0\" (UID: \"0928c45d-8553-49e6-a068-3e2e75a28c69\") " pod="openstack/swift-storage-0" Feb 27 10:37:27 crc kubenswrapper[4998]: I0227 10:37:27.025157 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 10:37:27 crc kubenswrapper[4998]: I0227 10:37:27.597250 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 27 10:37:27 crc kubenswrapper[4998]: W0227 10:37:27.607308 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0928c45d_8553_49e6_a068_3e2e75a28c69.slice/crio-77f57b4080c7e5a776884ecda24d90b3a5807a105a488eb935ea23d096ee9cf0 WatchSource:0}: Error finding container 77f57b4080c7e5a776884ecda24d90b3a5807a105a488eb935ea23d096ee9cf0: Status 404 returned error can't find the container with id 77f57b4080c7e5a776884ecda24d90b3a5807a105a488eb935ea23d096ee9cf0 Feb 27 10:37:27 crc kubenswrapper[4998]: I0227 10:37:27.649262 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"77f57b4080c7e5a776884ecda24d90b3a5807a105a488eb935ea23d096ee9cf0"} Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.202661 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.210358 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.321202 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-combined-ca-bundle\") pod \"4ca09e09-cedc-4476-bbe3-d179893232c8\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.322681 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-dispersionconf\") pod \"4ca09e09-cedc-4476-bbe3-d179893232c8\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.322754 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfh26\" (UniqueName: \"kubernetes.io/projected/e867b7da-2dd2-4112-97cc-9b1ab5b13222-kube-api-access-dfh26\") pod \"e867b7da-2dd2-4112-97cc-9b1ab5b13222\" (UID: \"e867b7da-2dd2-4112-97cc-9b1ab5b13222\") " Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.322881 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-swiftconf\") pod \"4ca09e09-cedc-4476-bbe3-d179893232c8\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.322909 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-scripts\") pod \"4ca09e09-cedc-4476-bbe3-d179893232c8\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.322975 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/e867b7da-2dd2-4112-97cc-9b1ab5b13222-operator-scripts\") pod \"e867b7da-2dd2-4112-97cc-9b1ab5b13222\" (UID: \"e867b7da-2dd2-4112-97cc-9b1ab5b13222\") " Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.323006 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ca09e09-cedc-4476-bbe3-d179893232c8-etc-swift\") pod \"4ca09e09-cedc-4476-bbe3-d179893232c8\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.323028 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbzkn\" (UniqueName: \"kubernetes.io/projected/4ca09e09-cedc-4476-bbe3-d179893232c8-kube-api-access-kbzkn\") pod \"4ca09e09-cedc-4476-bbe3-d179893232c8\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.323054 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-ring-data-devices\") pod \"4ca09e09-cedc-4476-bbe3-d179893232c8\" (UID: \"4ca09e09-cedc-4476-bbe3-d179893232c8\") " Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.323884 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e867b7da-2dd2-4112-97cc-9b1ab5b13222-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e867b7da-2dd2-4112-97cc-9b1ab5b13222" (UID: "e867b7da-2dd2-4112-97cc-9b1ab5b13222"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.324547 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ca09e09-cedc-4476-bbe3-d179893232c8-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4ca09e09-cedc-4476-bbe3-d179893232c8" (UID: "4ca09e09-cedc-4476-bbe3-d179893232c8"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.324854 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e867b7da-2dd2-4112-97cc-9b1ab5b13222-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.324876 4998 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ca09e09-cedc-4476-bbe3-d179893232c8-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.324852 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4ca09e09-cedc-4476-bbe3-d179893232c8" (UID: "4ca09e09-cedc-4476-bbe3-d179893232c8"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.327486 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ca09e09-cedc-4476-bbe3-d179893232c8-kube-api-access-kbzkn" (OuterVolumeSpecName: "kube-api-access-kbzkn") pod "4ca09e09-cedc-4476-bbe3-d179893232c8" (UID: "4ca09e09-cedc-4476-bbe3-d179893232c8"). InnerVolumeSpecName "kube-api-access-kbzkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.327690 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e867b7da-2dd2-4112-97cc-9b1ab5b13222-kube-api-access-dfh26" (OuterVolumeSpecName: "kube-api-access-dfh26") pod "e867b7da-2dd2-4112-97cc-9b1ab5b13222" (UID: "e867b7da-2dd2-4112-97cc-9b1ab5b13222"). InnerVolumeSpecName "kube-api-access-dfh26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.334099 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4ca09e09-cedc-4476-bbe3-d179893232c8" (UID: "4ca09e09-cedc-4476-bbe3-d179893232c8"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.355419 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4ca09e09-cedc-4476-bbe3-d179893232c8" (UID: "4ca09e09-cedc-4476-bbe3-d179893232c8"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.357619 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-scripts" (OuterVolumeSpecName: "scripts") pod "4ca09e09-cedc-4476-bbe3-d179893232c8" (UID: "4ca09e09-cedc-4476-bbe3-d179893232c8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.359170 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ca09e09-cedc-4476-bbe3-d179893232c8" (UID: "4ca09e09-cedc-4476-bbe3-d179893232c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.422389 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-695g4" podUID="ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9" containerName="ovn-controller" probeResult="failure" output=< Feb 27 10:37:28 crc kubenswrapper[4998]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 10:37:28 crc kubenswrapper[4998]: > Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.427260 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbzkn\" (UniqueName: \"kubernetes.io/projected/4ca09e09-cedc-4476-bbe3-d179893232c8-kube-api-access-kbzkn\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.427297 4998 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.427308 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.427322 4998 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.427334 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfh26\" (UniqueName: \"kubernetes.io/projected/e867b7da-2dd2-4112-97cc-9b1ab5b13222-kube-api-access-dfh26\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.427349 4998 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ca09e09-cedc-4476-bbe3-d179893232c8-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.427364 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ca09e09-cedc-4476-bbe3-d179893232c8-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.439171 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.440451 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pds2s" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.671574 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-695g4-config-nmlsc"] Feb 27 10:37:28 crc kubenswrapper[4998]: E0227 10:37:28.672014 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca09e09-cedc-4476-bbe3-d179893232c8" containerName="swift-ring-rebalance" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.672031 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca09e09-cedc-4476-bbe3-d179893232c8" containerName="swift-ring-rebalance" Feb 27 10:37:28 crc kubenswrapper[4998]: E0227 10:37:28.672050 4998 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e867b7da-2dd2-4112-97cc-9b1ab5b13222" containerName="mariadb-account-create-update" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.672058 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e867b7da-2dd2-4112-97cc-9b1ab5b13222" containerName="mariadb-account-create-update" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.672248 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="e867b7da-2dd2-4112-97cc-9b1ab5b13222" containerName="mariadb-account-create-update" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.672275 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ca09e09-cedc-4476-bbe3-d179893232c8" containerName="swift-ring-rebalance" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.677459 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.678823 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c2cn4" event={"ID":"4ca09e09-cedc-4476-bbe3-d179893232c8","Type":"ContainerDied","Data":"4b011128ac6b38c9c7bc2b6af664fa367fca730a2932757c609753371cd61c55"} Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.678866 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b011128ac6b38c9c7bc2b6af664fa367fca730a2932757c609753371cd61c55" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.678942 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-c2cn4" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.684147 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-695g4-config-nmlsc"] Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.686339 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.688117 4998 generic.go:334] "Generic (PLEG): container finished" podID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" containerID="f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b" exitCode=0 Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.688172 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ca208a2-3ba0-43e6-a2c4-942c12e54b41","Type":"ContainerDied","Data":"f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b"} Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.707168 4998 generic.go:334] "Generic (PLEG): container finished" podID="68cd6142-df7e-4994-97c0-0bc08ea1e3d4" containerID="009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1" exitCode=0 Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.707384 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"68cd6142-df7e-4994-97c0-0bc08ea1e3d4","Type":"ContainerDied","Data":"009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1"} Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.718373 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tpd7g" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.718823 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tpd7g" event={"ID":"e867b7da-2dd2-4112-97cc-9b1ab5b13222","Type":"ContainerDied","Data":"2ae9b7e42412c52159d9ba38ae3a833b2c56ef5a84a246b927309b5acf5e415f"} Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.718865 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae9b7e42412c52159d9ba38ae3a833b2c56ef5a84a246b927309b5acf5e415f" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.834582 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-scripts\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.834955 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run-ovn\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.835012 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9c89\" (UniqueName: \"kubernetes.io/projected/f6ed7b5b-9973-4a1e-920c-ca0f05625565-kube-api-access-f9c89\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.835094 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-additional-scripts\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.835162 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-log-ovn\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.835494 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.937361 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.937456 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-scripts\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.937502 4998 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run-ovn\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.937535 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9c89\" (UniqueName: \"kubernetes.io/projected/f6ed7b5b-9973-4a1e-920c-ca0f05625565-kube-api-access-f9c89\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.937595 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-additional-scripts\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.937648 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-log-ovn\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.937825 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-log-ovn\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.937986 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run-ovn\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.938154 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.940407 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.940575 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-scripts\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.948941 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-additional-scripts\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:28 crc kubenswrapper[4998]: I0227 10:37:28.956044 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9c89\" (UniqueName: \"kubernetes.io/projected/f6ed7b5b-9973-4a1e-920c-ca0f05625565-kube-api-access-f9c89\") pod \"ovn-controller-695g4-config-nmlsc\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 
10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.019966 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.516861 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-695g4-config-nmlsc"] Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.742814 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ca208a2-3ba0-43e6-a2c4-942c12e54b41","Type":"ContainerStarted","Data":"9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88"} Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.743135 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.744703 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-695g4-config-nmlsc" event={"ID":"f6ed7b5b-9973-4a1e-920c-ca0f05625565","Type":"ContainerStarted","Data":"ebc09166ba4fc7db169a97cc907dc41592348d6885556a6bb8b49e01683f299a"} Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.749509 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"68cd6142-df7e-4994-97c0-0bc08ea1e3d4","Type":"ContainerStarted","Data":"f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f"} Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.749891 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.776534 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"e0f7b68a05ea34accf0ec94d4c9746a74a7f4a18213a8e1ffba2906a2c1a54bb"} Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.776803 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"62b52c332b868582cbf5006a6e42e22831fb065d69bb2ad3f1a247bb232dd451"} Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.776818 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"d93f7fde603258b09311f48cfff567099f6d447d93680d2d33353a5e641f4de3"} Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.805914 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.159160167 podStartE2EDuration="1m6.805861899s" podCreationTimestamp="2026-02-27 10:36:23 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.422727753 +0000 UTC m=+1152.420998721" lastFinishedPulling="2026-02-27 10:36:54.069429485 +0000 UTC m=+1166.067700453" observedRunningTime="2026-02-27 10:37:29.805827948 +0000 UTC m=+1201.804098936" watchObservedRunningTime="2026-02-27 10:37:29.805861899 +0000 UTC m=+1201.804132867" Feb 27 10:37:29 crc kubenswrapper[4998]: I0227 10:37:29.806312 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.941734885 podStartE2EDuration="1m6.806306423s" podCreationTimestamp="2026-02-27 10:36:23 +0000 UTC" firstStartedPulling="2026-02-27 10:36:40.442848001 +0000 UTC m=+1152.441118969" lastFinishedPulling="2026-02-27 10:36:54.307419539 +0000 UTC m=+1166.305690507" observedRunningTime="2026-02-27 10:37:29.774147267 +0000 UTC m=+1201.772418255" watchObservedRunningTime="2026-02-27 10:37:29.806306423 +0000 UTC m=+1201.804577411" Feb 27 10:37:30 crc kubenswrapper[4998]: I0227 10:37:30.786319 4998 generic.go:334] "Generic (PLEG): container finished" podID="f6ed7b5b-9973-4a1e-920c-ca0f05625565" 
containerID="87a1454506eff43f94db408a6272336f46547d1eb75b6b7ec2ad47b71e6445b0" exitCode=0 Feb 27 10:37:30 crc kubenswrapper[4998]: I0227 10:37:30.786368 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-695g4-config-nmlsc" event={"ID":"f6ed7b5b-9973-4a1e-920c-ca0f05625565","Type":"ContainerDied","Data":"87a1454506eff43f94db408a6272336f46547d1eb75b6b7ec2ad47b71e6445b0"} Feb 27 10:37:30 crc kubenswrapper[4998]: I0227 10:37:30.789793 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"d4ce9a18732134a684f91c811b267e82f131368d46885ad75d62037daa8d8d1b"} Feb 27 10:37:31 crc kubenswrapper[4998]: I0227 10:37:31.340319 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tpd7g"] Feb 27 10:37:31 crc kubenswrapper[4998]: I0227 10:37:31.347146 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tpd7g"] Feb 27 10:37:32 crc kubenswrapper[4998]: I0227 10:37:32.776209 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e867b7da-2dd2-4112-97cc-9b1ab5b13222" path="/var/lib/kubelet/pods/e867b7da-2dd2-4112-97cc-9b1ab5b13222/volumes" Feb 27 10:37:33 crc kubenswrapper[4998]: I0227 10:37:33.418483 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-695g4" Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.359048 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q5krj"] Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.384039 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.387099 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.416656 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q5krj"] Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.494539 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jsmm\" (UniqueName: \"kubernetes.io/projected/6c2d1d1d-3eba-45d7-935c-6c71925d009a-kube-api-access-7jsmm\") pod \"root-account-create-update-q5krj\" (UID: \"6c2d1d1d-3eba-45d7-935c-6c71925d009a\") " pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.494765 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c2d1d1d-3eba-45d7-935c-6c71925d009a-operator-scripts\") pod \"root-account-create-update-q5krj\" (UID: \"6c2d1d1d-3eba-45d7-935c-6c71925d009a\") " pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.596633 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jsmm\" (UniqueName: \"kubernetes.io/projected/6c2d1d1d-3eba-45d7-935c-6c71925d009a-kube-api-access-7jsmm\") pod \"root-account-create-update-q5krj\" (UID: \"6c2d1d1d-3eba-45d7-935c-6c71925d009a\") " pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.596748 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c2d1d1d-3eba-45d7-935c-6c71925d009a-operator-scripts\") pod \"root-account-create-update-q5krj\" (UID: 
\"6c2d1d1d-3eba-45d7-935c-6c71925d009a\") " pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.597592 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c2d1d1d-3eba-45d7-935c-6c71925d009a-operator-scripts\") pod \"root-account-create-update-q5krj\" (UID: \"6c2d1d1d-3eba-45d7-935c-6c71925d009a\") " pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.617135 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jsmm\" (UniqueName: \"kubernetes.io/projected/6c2d1d1d-3eba-45d7-935c-6c71925d009a-kube-api-access-7jsmm\") pod \"root-account-create-update-q5krj\" (UID: \"6c2d1d1d-3eba-45d7-935c-6c71925d009a\") " pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:36 crc kubenswrapper[4998]: I0227 10:37:36.712989 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.645031 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.744366 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-log-ovn\") pod \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.744687 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-additional-scripts\") pod \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.744722 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run-ovn\") pod \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.744530 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f6ed7b5b-9973-4a1e-920c-ca0f05625565" (UID: "f6ed7b5b-9973-4a1e-920c-ca0f05625565"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.744785 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run\") pod \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.744825 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-scripts\") pod \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.744853 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9c89\" (UniqueName: \"kubernetes.io/projected/f6ed7b5b-9973-4a1e-920c-ca0f05625565-kube-api-access-f9c89\") pod \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\" (UID: \"f6ed7b5b-9973-4a1e-920c-ca0f05625565\") " Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.744928 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run" (OuterVolumeSpecName: "var-run") pod "f6ed7b5b-9973-4a1e-920c-ca0f05625565" (UID: "f6ed7b5b-9973-4a1e-920c-ca0f05625565"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.745001 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f6ed7b5b-9973-4a1e-920c-ca0f05625565" (UID: "f6ed7b5b-9973-4a1e-920c-ca0f05625565"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.745158 4998 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.745171 4998 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.745178 4998 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6ed7b5b-9973-4a1e-920c-ca0f05625565-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.745445 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f6ed7b5b-9973-4a1e-920c-ca0f05625565" (UID: "f6ed7b5b-9973-4a1e-920c-ca0f05625565"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.745996 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-scripts" (OuterVolumeSpecName: "scripts") pod "f6ed7b5b-9973-4a1e-920c-ca0f05625565" (UID: "f6ed7b5b-9973-4a1e-920c-ca0f05625565"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.749929 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ed7b5b-9973-4a1e-920c-ca0f05625565-kube-api-access-f9c89" (OuterVolumeSpecName: "kube-api-access-f9c89") pod "f6ed7b5b-9973-4a1e-920c-ca0f05625565" (UID: "f6ed7b5b-9973-4a1e-920c-ca0f05625565"). InnerVolumeSpecName "kube-api-access-f9c89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.846675 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.846704 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9c89\" (UniqueName: \"kubernetes.io/projected/f6ed7b5b-9973-4a1e-920c-ca0f05625565-kube-api-access-f9c89\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.846715 4998 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6ed7b5b-9973-4a1e-920c-ca0f05625565-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.897402 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-695g4-config-nmlsc" event={"ID":"f6ed7b5b-9973-4a1e-920c-ca0f05625565","Type":"ContainerDied","Data":"ebc09166ba4fc7db169a97cc907dc41592348d6885556a6bb8b49e01683f299a"} Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.897445 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebc09166ba4fc7db169a97cc907dc41592348d6885556a6bb8b49e01683f299a" Feb 27 10:37:39 crc kubenswrapper[4998]: I0227 10:37:39.897491 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-695g4-config-nmlsc" Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.015276 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q5krj"] Feb 27 10:37:40 crc kubenswrapper[4998]: W0227 10:37:40.018703 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2d1d1d_3eba_45d7_935c_6c71925d009a.slice/crio-c1f227cc4d76bccd4fcdc3f8171b8bc6de0029201178a7a7361fce996aa72125 WatchSource:0}: Error finding container c1f227cc4d76bccd4fcdc3f8171b8bc6de0029201178a7a7361fce996aa72125: Status 404 returned error can't find the container with id c1f227cc4d76bccd4fcdc3f8171b8bc6de0029201178a7a7361fce996aa72125 Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.505490 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.505866 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.505926 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.506800 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d4bd8462bb415ab0298cf24a40c264a6708906ed9fa7eae7a8b7e15bb36a14c4"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.506878 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://d4bd8462bb415ab0298cf24a40c264a6708906ed9fa7eae7a8b7e15bb36a14c4" gracePeriod=600 Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.739757 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-695g4-config-nmlsc"] Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.747823 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-695g4-config-nmlsc"] Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.774186 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ed7b5b-9973-4a1e-920c-ca0f05625565" path="/var/lib/kubelet/pods/f6ed7b5b-9973-4a1e-920c-ca0f05625565/volumes" Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.910899 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kdmkv" event={"ID":"b4c7cf30-091f-4dea-bbc1-156ad96a5451","Type":"ContainerStarted","Data":"27c60324302e68015b924ed479bfbe21ddc9277870418632bf5a046d96883cbb"} Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.913259 4998 generic.go:334] "Generic (PLEG): container finished" podID="6c2d1d1d-3eba-45d7-935c-6c71925d009a" containerID="d4a209e1100cb7caa3a0e4db93d8d75115afcef62988d8eb6a691aa6bdd5023f" exitCode=0 Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.913309 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q5krj" 
event={"ID":"6c2d1d1d-3eba-45d7-935c-6c71925d009a","Type":"ContainerDied","Data":"d4a209e1100cb7caa3a0e4db93d8d75115afcef62988d8eb6a691aa6bdd5023f"} Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.913328 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q5krj" event={"ID":"6c2d1d1d-3eba-45d7-935c-6c71925d009a","Type":"ContainerStarted","Data":"c1f227cc4d76bccd4fcdc3f8171b8bc6de0029201178a7a7361fce996aa72125"} Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.922114 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="d4bd8462bb415ab0298cf24a40c264a6708906ed9fa7eae7a8b7e15bb36a14c4" exitCode=0 Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.922160 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"d4bd8462bb415ab0298cf24a40c264a6708906ed9fa7eae7a8b7e15bb36a14c4"} Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.922193 4998 scope.go:117] "RemoveContainer" containerID="798a591820f18523d1f6d494045865d6035d0c926980498f800d24c0dbf69b5e" Feb 27 10:37:40 crc kubenswrapper[4998]: I0227 10:37:40.932101 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kdmkv" podStartSLOduration=2.415342445 podStartE2EDuration="17.932080695s" podCreationTimestamp="2026-02-27 10:37:23 +0000 UTC" firstStartedPulling="2026-02-27 10:37:24.131628662 +0000 UTC m=+1196.129899630" lastFinishedPulling="2026-02-27 10:37:39.648366912 +0000 UTC m=+1211.646637880" observedRunningTime="2026-02-27 10:37:40.927845819 +0000 UTC m=+1212.926116787" watchObservedRunningTime="2026-02-27 10:37:40.932080695 +0000 UTC m=+1212.930351663" Feb 27 10:37:41 crc kubenswrapper[4998]: I0227 10:37:41.932559 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"fa835617bfc870e1b2eabc00e16bdc9b210a2250fe70bb608d05ed5f2f06bfbc"} Feb 27 10:37:41 crc kubenswrapper[4998]: I0227 10:37:41.936662 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"1ca4be16e80a784f69de58577ed65ff5ee6905ddaecdf24c618aeace201dafe7"} Feb 27 10:37:41 crc kubenswrapper[4998]: I0227 10:37:41.936703 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"10faf27728d5da883eab064fbf0c8e7b72ed9b5ae9fd901763404ac911897252"} Feb 27 10:37:41 crc kubenswrapper[4998]: I0227 10:37:41.936713 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"9430f3d0cce4dd9f200431dc1612489959b42b3218a6dddee34bc68ec74f9e5d"} Feb 27 10:37:41 crc kubenswrapper[4998]: I0227 10:37:41.936723 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"8a42275ca3f7ce17f9743f524333e7ae9f4c51e65f0490ba0160fc5118b9cafb"} Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.236374 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.387762 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c2d1d1d-3eba-45d7-935c-6c71925d009a-operator-scripts\") pod \"6c2d1d1d-3eba-45d7-935c-6c71925d009a\" (UID: \"6c2d1d1d-3eba-45d7-935c-6c71925d009a\") " Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.388020 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jsmm\" (UniqueName: \"kubernetes.io/projected/6c2d1d1d-3eba-45d7-935c-6c71925d009a-kube-api-access-7jsmm\") pod \"6c2d1d1d-3eba-45d7-935c-6c71925d009a\" (UID: \"6c2d1d1d-3eba-45d7-935c-6c71925d009a\") " Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.388705 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c2d1d1d-3eba-45d7-935c-6c71925d009a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c2d1d1d-3eba-45d7-935c-6c71925d009a" (UID: "6c2d1d1d-3eba-45d7-935c-6c71925d009a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.393170 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2d1d1d-3eba-45d7-935c-6c71925d009a-kube-api-access-7jsmm" (OuterVolumeSpecName: "kube-api-access-7jsmm") pod "6c2d1d1d-3eba-45d7-935c-6c71925d009a" (UID: "6c2d1d1d-3eba-45d7-935c-6c71925d009a"). InnerVolumeSpecName "kube-api-access-7jsmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.490199 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jsmm\" (UniqueName: \"kubernetes.io/projected/6c2d1d1d-3eba-45d7-935c-6c71925d009a-kube-api-access-7jsmm\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.490308 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c2d1d1d-3eba-45d7-935c-6c71925d009a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.944045 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q5krj" Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.944046 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q5krj" event={"ID":"6c2d1d1d-3eba-45d7-935c-6c71925d009a","Type":"ContainerDied","Data":"c1f227cc4d76bccd4fcdc3f8171b8bc6de0029201178a7a7361fce996aa72125"} Feb 27 10:37:42 crc kubenswrapper[4998]: I0227 10:37:42.944440 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f227cc4d76bccd4fcdc3f8171b8bc6de0029201178a7a7361fce996aa72125" Feb 27 10:37:42 crc kubenswrapper[4998]: E0227 10:37:42.964819 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2d1d1d_3eba_45d7_935c_6c71925d009a.slice\": RecentStats: unable to find data in memory cache]" Feb 27 10:37:44 crc kubenswrapper[4998]: I0227 10:37:44.748454 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:37:44 crc kubenswrapper[4998]: I0227 10:37:44.976145 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"1badd7890d23779cac3b2f395175867d757211036058213b7252303a652d7e8e"} Feb 27 10:37:44 crc kubenswrapper[4998]: I0227 10:37:44.976209 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"7793fdd3e129ec126bd9b7a4de31787bb5e3c55eb432620c273345e0cff76543"} Feb 27 10:37:44 crc kubenswrapper[4998]: I0227 10:37:44.976289 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"969e5db112352cd0be9643bfcc9ffdb651554740c0c16a5db52b903369b39d8f"} Feb 27 10:37:44 crc kubenswrapper[4998]: I0227 10:37:44.976303 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"431d7a77eac13300800c0af9fb525b8d25078127c2571bccf654a41fc66daa18"} Feb 27 10:37:45 crc kubenswrapper[4998]: I0227 10:37:45.087425 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 10:37:45 crc kubenswrapper[4998]: I0227 10:37:45.988932 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"7fe43dc65f506f006581d5bfb01c8dce4a9739134213728c6835075b67d83d1f"} Feb 27 10:37:45 crc kubenswrapper[4998]: I0227 10:37:45.989455 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"e4a19587d90e118cad81a340d47014caeb435154ba560194fdb7b29e78fdbdad"} Feb 27 10:37:45 crc kubenswrapper[4998]: I0227 10:37:45.989475 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"0928c45d-8553-49e6-a068-3e2e75a28c69","Type":"ContainerStarted","Data":"68a9ef7a53457952a6e8acb4fa60c3f20e95b0f4feec98deaeaacfccf6fb4d66"} Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.042940 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.603951866 podStartE2EDuration="36.042918538s" podCreationTimestamp="2026-02-27 10:37:10 +0000 UTC" firstStartedPulling="2026-02-27 10:37:27.609655697 +0000 UTC m=+1199.607926665" lastFinishedPulling="2026-02-27 10:37:44.048622369 +0000 UTC m=+1216.046893337" observedRunningTime="2026-02-27 10:37:46.036666247 +0000 UTC m=+1218.034937225" watchObservedRunningTime="2026-02-27 10:37:46.042918538 +0000 UTC m=+1218.041189506" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.309197 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-dsf4k"] Feb 27 10:37:46 crc kubenswrapper[4998]: E0227 10:37:46.309964 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2d1d1d-3eba-45d7-935c-6c71925d009a" containerName="mariadb-account-create-update" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.309994 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2d1d1d-3eba-45d7-935c-6c71925d009a" containerName="mariadb-account-create-update" Feb 27 10:37:46 crc kubenswrapper[4998]: E0227 10:37:46.310014 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ed7b5b-9973-4a1e-920c-ca0f05625565" containerName="ovn-config" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.310023 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ed7b5b-9973-4a1e-920c-ca0f05625565" containerName="ovn-config" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.310256 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ed7b5b-9973-4a1e-920c-ca0f05625565" containerName="ovn-config" Feb 27 10:37:46 crc 
kubenswrapper[4998]: I0227 10:37:46.310285 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2d1d1d-3eba-45d7-935c-6c71925d009a" containerName="mariadb-account-create-update" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.311322 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.313473 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.328046 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-dsf4k"] Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.476695 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-config\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.476979 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.477106 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.477213 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwbxf\" (UniqueName: \"kubernetes.io/projected/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-kube-api-access-mwbxf\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.477352 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.477389 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-svc\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.579199 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.579272 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwbxf\" (UniqueName: \"kubernetes.io/projected/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-kube-api-access-mwbxf\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: 
I0227 10:37:46.579334 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.579357 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-svc\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.579399 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-config\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.579421 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.580390 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.580409 4998 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.580424 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-svc\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.580611 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-config\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.580932 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.600155 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwbxf\" (UniqueName: \"kubernetes.io/projected/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-kube-api-access-mwbxf\") pod \"dnsmasq-dns-764c5664d7-dsf4k\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.638264 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.870451 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-j226t"] Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.872315 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j226t" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.885706 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-j226t"] Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.972652 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-70b3-account-create-update-jxckj"] Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.973741 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.976884 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.992040 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frhg\" (UniqueName: \"kubernetes.io/projected/a51c99e4-d488-4245-8642-7e02c861919c-kube-api-access-9frhg\") pod \"cinder-db-create-j226t\" (UID: \"a51c99e4-d488-4245-8642-7e02c861919c\") " pod="openstack/cinder-db-create-j226t" Feb 27 10:37:46 crc kubenswrapper[4998]: I0227 10:37:46.992106 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a51c99e4-d488-4245-8642-7e02c861919c-operator-scripts\") pod \"cinder-db-create-j226t\" (UID: \"a51c99e4-d488-4245-8642-7e02c861919c\") " pod="openstack/cinder-db-create-j226t" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.009858 4998 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/cinder-70b3-account-create-update-jxckj"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.073027 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tdqvg"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.074033 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.093157 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c813f4-c426-471d-a640-9889450bfec7-operator-scripts\") pod \"cinder-70b3-account-create-update-jxckj\" (UID: \"f0c813f4-c426-471d-a640-9889450bfec7\") " pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.093292 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9frhg\" (UniqueName: \"kubernetes.io/projected/a51c99e4-d488-4245-8642-7e02c861919c-kube-api-access-9frhg\") pod \"cinder-db-create-j226t\" (UID: \"a51c99e4-d488-4245-8642-7e02c861919c\") " pod="openstack/cinder-db-create-j226t" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.093334 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-526p5\" (UniqueName: \"kubernetes.io/projected/f0c813f4-c426-471d-a640-9889450bfec7-kube-api-access-526p5\") pod \"cinder-70b3-account-create-update-jxckj\" (UID: \"f0c813f4-c426-471d-a640-9889450bfec7\") " pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.093388 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a51c99e4-d488-4245-8642-7e02c861919c-operator-scripts\") pod \"cinder-db-create-j226t\" (UID: 
\"a51c99e4-d488-4245-8642-7e02c861919c\") " pod="openstack/cinder-db-create-j226t" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.095970 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tdqvg"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.096326 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a51c99e4-d488-4245-8642-7e02c861919c-operator-scripts\") pod \"cinder-db-create-j226t\" (UID: \"a51c99e4-d488-4245-8642-7e02c861919c\") " pod="openstack/cinder-db-create-j226t" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.115512 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9frhg\" (UniqueName: \"kubernetes.io/projected/a51c99e4-d488-4245-8642-7e02c861919c-kube-api-access-9frhg\") pod \"cinder-db-create-j226t\" (UID: \"a51c99e4-d488-4245-8642-7e02c861919c\") " pod="openstack/cinder-db-create-j226t" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.196059 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-operator-scripts\") pod \"barbican-db-create-tdqvg\" (UID: \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\") " pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.196106 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c813f4-c426-471d-a640-9889450bfec7-operator-scripts\") pod \"cinder-70b3-account-create-update-jxckj\" (UID: \"f0c813f4-c426-471d-a640-9889450bfec7\") " pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.196202 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kxn5t\" (UniqueName: \"kubernetes.io/projected/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-kube-api-access-kxn5t\") pod \"barbican-db-create-tdqvg\" (UID: \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\") " pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.196292 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-526p5\" (UniqueName: \"kubernetes.io/projected/f0c813f4-c426-471d-a640-9889450bfec7-kube-api-access-526p5\") pod \"cinder-70b3-account-create-update-jxckj\" (UID: \"f0c813f4-c426-471d-a640-9889450bfec7\") " pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.196920 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c813f4-c426-471d-a640-9889450bfec7-operator-scripts\") pod \"cinder-70b3-account-create-update-jxckj\" (UID: \"f0c813f4-c426-471d-a640-9889450bfec7\") " pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.204780 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f503-account-create-update-sppwc"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.205991 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.209667 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.217911 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-j226t" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.244484 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-526p5\" (UniqueName: \"kubernetes.io/projected/f0c813f4-c426-471d-a640-9889450bfec7-kube-api-access-526p5\") pod \"cinder-70b3-account-create-update-jxckj\" (UID: \"f0c813f4-c426-471d-a640-9889450bfec7\") " pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.247296 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-68rxz"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.248542 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.267310 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f503-account-create-update-sppwc"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.278302 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-68rxz"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.297992 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nckbx\" (UniqueName: \"kubernetes.io/projected/33b458da-5079-4368-935a-74562555231c-kube-api-access-nckbx\") pod \"barbican-f503-account-create-update-sppwc\" (UID: \"33b458da-5079-4368-935a-74562555231c\") " pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.298040 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b458da-5079-4368-935a-74562555231c-operator-scripts\") pod \"barbican-f503-account-create-update-sppwc\" (UID: \"33b458da-5079-4368-935a-74562555231c\") " 
pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.298086 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxn5t\" (UniqueName: \"kubernetes.io/projected/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-kube-api-access-kxn5t\") pod \"barbican-db-create-tdqvg\" (UID: \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\") " pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.298207 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-operator-scripts\") pod \"barbican-db-create-tdqvg\" (UID: \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\") " pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.298899 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-operator-scripts\") pod \"barbican-db-create-tdqvg\" (UID: \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\") " pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.311764 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.323165 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxn5t\" (UniqueName: \"kubernetes.io/projected/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-kube-api-access-kxn5t\") pod \"barbican-db-create-tdqvg\" (UID: \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\") " pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.346192 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-dsf4k"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.361585 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8vc7w"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.362932 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.370207 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.370482 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.370784 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.374855 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4n75w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.390559 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8vc7w"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.400059 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c53e7f43-9c1d-487b-984a-f6ea82b5caec-operator-scripts\") pod \"neutron-db-create-68rxz\" (UID: \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\") " pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.400120 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw6q6\" (UniqueName: \"kubernetes.io/projected/c53e7f43-9c1d-487b-984a-f6ea82b5caec-kube-api-access-rw6q6\") pod \"neutron-db-create-68rxz\" (UID: \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\") " pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.400256 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nckbx\" (UniqueName: \"kubernetes.io/projected/33b458da-5079-4368-935a-74562555231c-kube-api-access-nckbx\") pod \"barbican-f503-account-create-update-sppwc\" (UID: \"33b458da-5079-4368-935a-74562555231c\") " pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.400291 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b458da-5079-4368-935a-74562555231c-operator-scripts\") pod \"barbican-f503-account-create-update-sppwc\" (UID: \"33b458da-5079-4368-935a-74562555231c\") " pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.416424 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.419460 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b458da-5079-4368-935a-74562555231c-operator-scripts\") pod \"barbican-f503-account-create-update-sppwc\" (UID: \"33b458da-5079-4368-935a-74562555231c\") " pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.443745 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3ce1-account-create-update-25bnc"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.462566 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.488684 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.489208 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nckbx\" (UniqueName: \"kubernetes.io/projected/33b458da-5079-4368-935a-74562555231c-kube-api-access-nckbx\") pod \"barbican-f503-account-create-update-sppwc\" (UID: \"33b458da-5079-4368-935a-74562555231c\") " pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.503117 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-operator-scripts\") pod \"neutron-3ce1-account-create-update-25bnc\" (UID: \"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\") " pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.503214 4998 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c53e7f43-9c1d-487b-984a-f6ea82b5caec-operator-scripts\") pod \"neutron-db-create-68rxz\" (UID: \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\") " pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.503259 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-combined-ca-bundle\") pod \"keystone-db-sync-8vc7w\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.503301 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw6q6\" (UniqueName: \"kubernetes.io/projected/c53e7f43-9c1d-487b-984a-f6ea82b5caec-kube-api-access-rw6q6\") pod \"neutron-db-create-68rxz\" (UID: \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\") " pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.503327 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t8np\" (UniqueName: \"kubernetes.io/projected/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-kube-api-access-7t8np\") pod \"neutron-3ce1-account-create-update-25bnc\" (UID: \"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\") " pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.503375 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-config-data\") pod \"keystone-db-sync-8vc7w\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.503457 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdtp6\" (UniqueName: \"kubernetes.io/projected/87a57573-5e1f-4004-bb42-4de9e20de0ef-kube-api-access-jdtp6\") pod \"keystone-db-sync-8vc7w\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.509413 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c53e7f43-9c1d-487b-984a-f6ea82b5caec-operator-scripts\") pod \"neutron-db-create-68rxz\" (UID: \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\") " pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.527353 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3ce1-account-create-update-25bnc"] Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.546979 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw6q6\" (UniqueName: \"kubernetes.io/projected/c53e7f43-9c1d-487b-984a-f6ea82b5caec-kube-api-access-rw6q6\") pod \"neutron-db-create-68rxz\" (UID: \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\") " pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.606700 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdtp6\" (UniqueName: \"kubernetes.io/projected/87a57573-5e1f-4004-bb42-4de9e20de0ef-kube-api-access-jdtp6\") pod \"keystone-db-sync-8vc7w\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.606785 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-operator-scripts\") pod \"neutron-3ce1-account-create-update-25bnc\" (UID: 
\"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\") " pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.606831 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-combined-ca-bundle\") pod \"keystone-db-sync-8vc7w\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.606871 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t8np\" (UniqueName: \"kubernetes.io/projected/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-kube-api-access-7t8np\") pod \"neutron-3ce1-account-create-update-25bnc\" (UID: \"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\") " pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.606905 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-config-data\") pod \"keystone-db-sync-8vc7w\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.609149 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-operator-scripts\") pod \"neutron-3ce1-account-create-update-25bnc\" (UID: \"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\") " pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.616086 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-config-data\") pod \"keystone-db-sync-8vc7w\" (UID: 
\"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.616159 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-combined-ca-bundle\") pod \"keystone-db-sync-8vc7w\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.628911 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdtp6\" (UniqueName: \"kubernetes.io/projected/87a57573-5e1f-4004-bb42-4de9e20de0ef-kube-api-access-jdtp6\") pod \"keystone-db-sync-8vc7w\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.656344 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t8np\" (UniqueName: \"kubernetes.io/projected/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-kube-api-access-7t8np\") pod \"neutron-3ce1-account-create-update-25bnc\" (UID: \"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\") " pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.741336 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.803909 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.842375 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:47 crc kubenswrapper[4998]: I0227 10:37:47.857321 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 10:37:48.061636 4998 generic.go:334] "Generic (PLEG): container finished" podID="f71ed8ad-2ac2-4463-9341-87d8dde20ec8" containerID="1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b" exitCode=0 Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 10:37:48.061683 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" event={"ID":"f71ed8ad-2ac2-4463-9341-87d8dde20ec8","Type":"ContainerDied","Data":"1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b"} Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 10:37:48.061709 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" event={"ID":"f71ed8ad-2ac2-4463-9341-87d8dde20ec8","Type":"ContainerStarted","Data":"107a04af26b7a5d95baa8c5dd506a31fcd53611a61532b794ac0cce06d72bd3f"} Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 10:37:48.116519 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tdqvg"] Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 10:37:48.170328 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f503-account-create-update-sppwc"] Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 10:37:48.192819 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-70b3-account-create-update-jxckj"] Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 10:37:48.199889 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-j226t"] Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 10:37:48.591901 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-68rxz"] Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 10:37:48.712844 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8vc7w"] Feb 27 10:37:48 crc kubenswrapper[4998]: I0227 
10:37:48.748104 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3ce1-account-create-update-25bnc"] Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.072707 4998 generic.go:334] "Generic (PLEG): container finished" podID="a99365b3-16bc-4dce-9952-9f5cc37dfe2b" containerID="c70d09dd99c13f00697825f7e4e0c0041a19779e0ef465ce606cbe63ae93670f" exitCode=0 Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.072843 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdqvg" event={"ID":"a99365b3-16bc-4dce-9952-9f5cc37dfe2b","Type":"ContainerDied","Data":"c70d09dd99c13f00697825f7e4e0c0041a19779e0ef465ce606cbe63ae93670f"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.073083 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdqvg" event={"ID":"a99365b3-16bc-4dce-9952-9f5cc37dfe2b","Type":"ContainerStarted","Data":"c956569fd7f30d42cf5506ee12f942bcf57ccaebeaf8625106163f9c6470a3f7"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.076947 4998 generic.go:334] "Generic (PLEG): container finished" podID="33b458da-5079-4368-935a-74562555231c" containerID="e76b698c2159f521a393996218c6ebaeec4738abd51d34008556d34317066f14" exitCode=0 Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.077005 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f503-account-create-update-sppwc" event={"ID":"33b458da-5079-4368-935a-74562555231c","Type":"ContainerDied","Data":"e76b698c2159f521a393996218c6ebaeec4738abd51d34008556d34317066f14"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.077028 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f503-account-create-update-sppwc" event={"ID":"33b458da-5079-4368-935a-74562555231c","Type":"ContainerStarted","Data":"41bee23c48140cfcea0f4f039a194a29b6c97e1f0ace5a92fdc17437714158ca"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.082006 4998 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-3ce1-account-create-update-25bnc" event={"ID":"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b","Type":"ContainerStarted","Data":"3ad8ead1d374638d70d30ef0515106557398f5e552665a5da2284551e7f7e532"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.082086 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3ce1-account-create-update-25bnc" event={"ID":"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b","Type":"ContainerStarted","Data":"f121eb9489cc67ae657cb08cec497a554d33431d43b40bc79aba19fa93c86bff"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.087741 4998 generic.go:334] "Generic (PLEG): container finished" podID="a51c99e4-d488-4245-8642-7e02c861919c" containerID="da647573782f2ae67fe422e061eca74eab0037368f1c39126eea24206d3ad9e9" exitCode=0 Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.087812 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j226t" event={"ID":"a51c99e4-d488-4245-8642-7e02c861919c","Type":"ContainerDied","Data":"da647573782f2ae67fe422e061eca74eab0037368f1c39126eea24206d3ad9e9"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.087840 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j226t" event={"ID":"a51c99e4-d488-4245-8642-7e02c861919c","Type":"ContainerStarted","Data":"04c797730dfb97af3e18cd9c3e7d1874c2d91c4f3b3824db8085e63cec5d511b"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.088936 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68rxz" event={"ID":"c53e7f43-9c1d-487b-984a-f6ea82b5caec","Type":"ContainerStarted","Data":"df502cfcbbdee7b48d76fc4a8b1f4caf4a3ae3fbdc80f4ec4db85fab974eb7d0"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.088965 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68rxz" 
event={"ID":"c53e7f43-9c1d-487b-984a-f6ea82b5caec","Type":"ContainerStarted","Data":"2c280bf696a362f789bc02c50729f158f37a716674f761d05fd56a16ed9ec017"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.091827 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8vc7w" event={"ID":"87a57573-5e1f-4004-bb42-4de9e20de0ef","Type":"ContainerStarted","Data":"89731fd3726b12b959dfc8b3b7f4328b23903629fcc1239425c758a8417a1730"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.098285 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" event={"ID":"f71ed8ad-2ac2-4463-9341-87d8dde20ec8","Type":"ContainerStarted","Data":"049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.098525 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.102261 4998 generic.go:334] "Generic (PLEG): container finished" podID="f0c813f4-c426-471d-a640-9889450bfec7" containerID="e3ed6fa400f5b4e9c3001899d26ba4be388dd3a9b92720a015b3ee8d86e584be" exitCode=0 Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.102304 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-70b3-account-create-update-jxckj" event={"ID":"f0c813f4-c426-471d-a640-9889450bfec7","Type":"ContainerDied","Data":"e3ed6fa400f5b4e9c3001899d26ba4be388dd3a9b92720a015b3ee8d86e584be"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.102392 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-70b3-account-create-update-jxckj" event={"ID":"f0c813f4-c426-471d-a640-9889450bfec7","Type":"ContainerStarted","Data":"fe7ee4e0afd1428f0fe95d633aea3a463004deced27d271f6a2f8db1421d13fb"} Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.116691 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-3ce1-account-create-update-25bnc" podStartSLOduration=2.116668222 podStartE2EDuration="2.116668222s" podCreationTimestamp="2026-02-27 10:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:37:49.110407001 +0000 UTC m=+1221.108677969" watchObservedRunningTime="2026-02-27 10:37:49.116668222 +0000 UTC m=+1221.114939190" Feb 27 10:37:49 crc kubenswrapper[4998]: I0227 10:37:49.179114 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" podStartSLOduration=3.179091792 podStartE2EDuration="3.179091792s" podCreationTimestamp="2026-02-27 10:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:37:49.148550669 +0000 UTC m=+1221.146821647" watchObservedRunningTime="2026-02-27 10:37:49.179091792 +0000 UTC m=+1221.177362760" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.126801 4998 generic.go:334] "Generic (PLEG): container finished" podID="c53e7f43-9c1d-487b-984a-f6ea82b5caec" containerID="df502cfcbbdee7b48d76fc4a8b1f4caf4a3ae3fbdc80f4ec4db85fab974eb7d0" exitCode=0 Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.126875 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68rxz" event={"ID":"c53e7f43-9c1d-487b-984a-f6ea82b5caec","Type":"ContainerDied","Data":"df502cfcbbdee7b48d76fc4a8b1f4caf4a3ae3fbdc80f4ec4db85fab974eb7d0"} Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.128386 4998 generic.go:334] "Generic (PLEG): container finished" podID="b35bf46b-b800-4e48-a90f-1a5e25eb3e3b" containerID="3ad8ead1d374638d70d30ef0515106557398f5e552665a5da2284551e7f7e532" exitCode=0 Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.128493 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-3ce1-account-create-update-25bnc" event={"ID":"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b","Type":"ContainerDied","Data":"3ad8ead1d374638d70d30ef0515106557398f5e552665a5da2284551e7f7e532"} Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.627100 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.677201 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b458da-5079-4368-935a-74562555231c-operator-scripts\") pod \"33b458da-5079-4368-935a-74562555231c\" (UID: \"33b458da-5079-4368-935a-74562555231c\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.677315 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nckbx\" (UniqueName: \"kubernetes.io/projected/33b458da-5079-4368-935a-74562555231c-kube-api-access-nckbx\") pod \"33b458da-5079-4368-935a-74562555231c\" (UID: \"33b458da-5079-4368-935a-74562555231c\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.680699 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b458da-5079-4368-935a-74562555231c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33b458da-5079-4368-935a-74562555231c" (UID: "33b458da-5079-4368-935a-74562555231c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.689452 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b458da-5079-4368-935a-74562555231c-kube-api-access-nckbx" (OuterVolumeSpecName: "kube-api-access-nckbx") pod "33b458da-5079-4368-935a-74562555231c" (UID: "33b458da-5079-4368-935a-74562555231c"). InnerVolumeSpecName "kube-api-access-nckbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.784388 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b458da-5079-4368-935a-74562555231c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.784425 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nckbx\" (UniqueName: \"kubernetes.io/projected/33b458da-5079-4368-935a-74562555231c-kube-api-access-nckbx\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.819790 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.829877 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.842590 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.879208 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-j226t" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.885620 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c53e7f43-9c1d-487b-984a-f6ea82b5caec-operator-scripts\") pod \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\" (UID: \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.885686 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c813f4-c426-471d-a640-9889450bfec7-operator-scripts\") pod \"f0c813f4-c426-471d-a640-9889450bfec7\" (UID: \"f0c813f4-c426-471d-a640-9889450bfec7\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.885714 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-526p5\" (UniqueName: \"kubernetes.io/projected/f0c813f4-c426-471d-a640-9889450bfec7-kube-api-access-526p5\") pod \"f0c813f4-c426-471d-a640-9889450bfec7\" (UID: \"f0c813f4-c426-471d-a640-9889450bfec7\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.885821 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-operator-scripts\") pod \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\" (UID: \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.885882 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxn5t\" (UniqueName: \"kubernetes.io/projected/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-kube-api-access-kxn5t\") pod \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\" (UID: \"a99365b3-16bc-4dce-9952-9f5cc37dfe2b\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.885909 4998 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rw6q6\" (UniqueName: \"kubernetes.io/projected/c53e7f43-9c1d-487b-984a-f6ea82b5caec-kube-api-access-rw6q6\") pod \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\" (UID: \"c53e7f43-9c1d-487b-984a-f6ea82b5caec\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.886125 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c53e7f43-9c1d-487b-984a-f6ea82b5caec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c53e7f43-9c1d-487b-984a-f6ea82b5caec" (UID: "c53e7f43-9c1d-487b-984a-f6ea82b5caec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.886531 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c53e7f43-9c1d-487b-984a-f6ea82b5caec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.887420 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c813f4-c426-471d-a640-9889450bfec7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0c813f4-c426-471d-a640-9889450bfec7" (UID: "f0c813f4-c426-471d-a640-9889450bfec7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.887469 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a99365b3-16bc-4dce-9952-9f5cc37dfe2b" (UID: "a99365b3-16bc-4dce-9952-9f5cc37dfe2b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.891545 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-kube-api-access-kxn5t" (OuterVolumeSpecName: "kube-api-access-kxn5t") pod "a99365b3-16bc-4dce-9952-9f5cc37dfe2b" (UID: "a99365b3-16bc-4dce-9952-9f5cc37dfe2b"). InnerVolumeSpecName "kube-api-access-kxn5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.891658 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53e7f43-9c1d-487b-984a-f6ea82b5caec-kube-api-access-rw6q6" (OuterVolumeSpecName: "kube-api-access-rw6q6") pod "c53e7f43-9c1d-487b-984a-f6ea82b5caec" (UID: "c53e7f43-9c1d-487b-984a-f6ea82b5caec"). InnerVolumeSpecName "kube-api-access-rw6q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.895454 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c813f4-c426-471d-a640-9889450bfec7-kube-api-access-526p5" (OuterVolumeSpecName: "kube-api-access-526p5") pod "f0c813f4-c426-471d-a640-9889450bfec7" (UID: "f0c813f4-c426-471d-a640-9889450bfec7"). InnerVolumeSpecName "kube-api-access-526p5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.988087 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9frhg\" (UniqueName: \"kubernetes.io/projected/a51c99e4-d488-4245-8642-7e02c861919c-kube-api-access-9frhg\") pod \"a51c99e4-d488-4245-8642-7e02c861919c\" (UID: \"a51c99e4-d488-4245-8642-7e02c861919c\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.988184 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a51c99e4-d488-4245-8642-7e02c861919c-operator-scripts\") pod \"a51c99e4-d488-4245-8642-7e02c861919c\" (UID: \"a51c99e4-d488-4245-8642-7e02c861919c\") " Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.988897 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a51c99e4-d488-4245-8642-7e02c861919c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a51c99e4-d488-4245-8642-7e02c861919c" (UID: "a51c99e4-d488-4245-8642-7e02c861919c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.989188 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a51c99e4-d488-4245-8642-7e02c861919c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.989270 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0c813f4-c426-471d-a640-9889450bfec7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.989287 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-526p5\" (UniqueName: \"kubernetes.io/projected/f0c813f4-c426-471d-a640-9889450bfec7-kube-api-access-526p5\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.989301 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.989341 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxn5t\" (UniqueName: \"kubernetes.io/projected/a99365b3-16bc-4dce-9952-9f5cc37dfe2b-kube-api-access-kxn5t\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.989353 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw6q6\" (UniqueName: \"kubernetes.io/projected/c53e7f43-9c1d-487b-984a-f6ea82b5caec-kube-api-access-rw6q6\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:50 crc kubenswrapper[4998]: I0227 10:37:50.991496 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a51c99e4-d488-4245-8642-7e02c861919c-kube-api-access-9frhg" (OuterVolumeSpecName: "kube-api-access-9frhg") pod 
"a51c99e4-d488-4245-8642-7e02c861919c" (UID: "a51c99e4-d488-4245-8642-7e02c861919c"). InnerVolumeSpecName "kube-api-access-9frhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.090275 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9frhg\" (UniqueName: \"kubernetes.io/projected/a51c99e4-d488-4245-8642-7e02c861919c-kube-api-access-9frhg\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.137038 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-70b3-account-create-update-jxckj" event={"ID":"f0c813f4-c426-471d-a640-9889450bfec7","Type":"ContainerDied","Data":"fe7ee4e0afd1428f0fe95d633aea3a463004deced27d271f6a2f8db1421d13fb"} Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.137106 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7ee4e0afd1428f0fe95d633aea3a463004deced27d271f6a2f8db1421d13fb" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.137143 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-70b3-account-create-update-jxckj" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.138644 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdqvg" event={"ID":"a99365b3-16bc-4dce-9952-9f5cc37dfe2b","Type":"ContainerDied","Data":"c956569fd7f30d42cf5506ee12f942bcf57ccaebeaf8625106163f9c6470a3f7"} Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.138661 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c956569fd7f30d42cf5506ee12f942bcf57ccaebeaf8625106163f9c6470a3f7" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.138658 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tdqvg" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.140184 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f503-account-create-update-sppwc" event={"ID":"33b458da-5079-4368-935a-74562555231c","Type":"ContainerDied","Data":"41bee23c48140cfcea0f4f039a194a29b6c97e1f0ace5a92fdc17437714158ca"} Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.140204 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41bee23c48140cfcea0f4f039a194a29b6c97e1f0ace5a92fdc17437714158ca" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.140205 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f503-account-create-update-sppwc" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.141735 4998 generic.go:334] "Generic (PLEG): container finished" podID="b4c7cf30-091f-4dea-bbc1-156ad96a5451" containerID="27c60324302e68015b924ed479bfbe21ddc9277870418632bf5a046d96883cbb" exitCode=0 Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.141778 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kdmkv" event={"ID":"b4c7cf30-091f-4dea-bbc1-156ad96a5451","Type":"ContainerDied","Data":"27c60324302e68015b924ed479bfbe21ddc9277870418632bf5a046d96883cbb"} Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.144690 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j226t" event={"ID":"a51c99e4-d488-4245-8642-7e02c861919c","Type":"ContainerDied","Data":"04c797730dfb97af3e18cd9c3e7d1874c2d91c4f3b3824db8085e63cec5d511b"} Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.144731 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04c797730dfb97af3e18cd9c3e7d1874c2d91c4f3b3824db8085e63cec5d511b" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.144790 4998 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j226t" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.146685 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68rxz" Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.149302 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68rxz" event={"ID":"c53e7f43-9c1d-487b-984a-f6ea82b5caec","Type":"ContainerDied","Data":"2c280bf696a362f789bc02c50729f158f37a716674f761d05fd56a16ed9ec017"} Feb 27 10:37:51 crc kubenswrapper[4998]: I0227 10:37:51.149339 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c280bf696a362f789bc02c50729f158f37a716674f761d05fd56a16ed9ec017" Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.904499 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.914078 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.943720 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t8np\" (UniqueName: \"kubernetes.io/projected/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-kube-api-access-7t8np\") pod \"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\" (UID: \"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\") " Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.943806 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-combined-ca-bundle\") pod \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.943869 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-config-data\") pod \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.943896 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-operator-scripts\") pod \"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\" (UID: \"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b\") " Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.943922 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-db-sync-config-data\") pod \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.943936 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vlkjz\" (UniqueName: \"kubernetes.io/projected/b4c7cf30-091f-4dea-bbc1-156ad96a5451-kube-api-access-vlkjz\") pod \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\" (UID: \"b4c7cf30-091f-4dea-bbc1-156ad96a5451\") " Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.945367 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b35bf46b-b800-4e48-a90f-1a5e25eb3e3b" (UID: "b35bf46b-b800-4e48-a90f-1a5e25eb3e3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.949010 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c7cf30-091f-4dea-bbc1-156ad96a5451-kube-api-access-vlkjz" (OuterVolumeSpecName: "kube-api-access-vlkjz") pod "b4c7cf30-091f-4dea-bbc1-156ad96a5451" (UID: "b4c7cf30-091f-4dea-bbc1-156ad96a5451"). InnerVolumeSpecName "kube-api-access-vlkjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.961543 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b4c7cf30-091f-4dea-bbc1-156ad96a5451" (UID: "b4c7cf30-091f-4dea-bbc1-156ad96a5451"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.962333 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-kube-api-access-7t8np" (OuterVolumeSpecName: "kube-api-access-7t8np") pod "b35bf46b-b800-4e48-a90f-1a5e25eb3e3b" (UID: "b35bf46b-b800-4e48-a90f-1a5e25eb3e3b"). InnerVolumeSpecName "kube-api-access-7t8np". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:53 crc kubenswrapper[4998]: I0227 10:37:53.983842 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4c7cf30-091f-4dea-bbc1-156ad96a5451" (UID: "b4c7cf30-091f-4dea-bbc1-156ad96a5451"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.023399 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-config-data" (OuterVolumeSpecName: "config-data") pod "b4c7cf30-091f-4dea-bbc1-156ad96a5451" (UID: "b4c7cf30-091f-4dea-bbc1-156ad96a5451"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.045466 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.045513 4998 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.045528 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlkjz\" (UniqueName: \"kubernetes.io/projected/b4c7cf30-091f-4dea-bbc1-156ad96a5451-kube-api-access-vlkjz\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.045542 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t8np\" (UniqueName: 
\"kubernetes.io/projected/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b-kube-api-access-7t8np\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.045556 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.045568 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c7cf30-091f-4dea-bbc1-156ad96a5451-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.193101 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kdmkv" event={"ID":"b4c7cf30-091f-4dea-bbc1-156ad96a5451","Type":"ContainerDied","Data":"b294f2f5a2b0a8cab723d817a197c1d995381f89387cbf10305c211c30d6f6f1"} Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.193171 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b294f2f5a2b0a8cab723d817a197c1d995381f89387cbf10305c211c30d6f6f1" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.193135 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kdmkv" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.195893 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8vc7w" event={"ID":"87a57573-5e1f-4004-bb42-4de9e20de0ef","Type":"ContainerStarted","Data":"90031f06f62cdca0d25a84c9f00af5f52b5558ebac987e5e53ec011eba89cd2f"} Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.197788 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3ce1-account-create-update-25bnc" event={"ID":"b35bf46b-b800-4e48-a90f-1a5e25eb3e3b","Type":"ContainerDied","Data":"f121eb9489cc67ae657cb08cec497a554d33431d43b40bc79aba19fa93c86bff"} Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.197824 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3ce1-account-create-update-25bnc" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.197827 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f121eb9489cc67ae657cb08cec497a554d33431d43b40bc79aba19fa93c86bff" Feb 27 10:37:54 crc kubenswrapper[4998]: I0227 10:37:54.221980 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8vc7w" podStartSLOduration=2.17622218 podStartE2EDuration="7.221958155s" podCreationTimestamp="2026-02-27 10:37:47 +0000 UTC" firstStartedPulling="2026-02-27 10:37:48.745793468 +0000 UTC m=+1220.744064436" lastFinishedPulling="2026-02-27 10:37:53.791529443 +0000 UTC m=+1225.789800411" observedRunningTime="2026-02-27 10:37:54.218344029 +0000 UTC m=+1226.216614997" watchObservedRunningTime="2026-02-27 10:37:54.221958155 +0000 UTC m=+1226.220229113" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.347470 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-dsf4k"] Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.348856 4998 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" podUID="f71ed8ad-2ac2-4463-9341-87d8dde20ec8" containerName="dnsmasq-dns" containerID="cri-o://049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637" gracePeriod=10 Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.351705 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.391514 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-brzqt"] Feb 27 10:37:55 crc kubenswrapper[4998]: E0227 10:37:55.391916 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b458da-5079-4368-935a-74562555231c" containerName="mariadb-account-create-update" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.391938 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b458da-5079-4368-935a-74562555231c" containerName="mariadb-account-create-update" Feb 27 10:37:55 crc kubenswrapper[4998]: E0227 10:37:55.391958 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51c99e4-d488-4245-8642-7e02c861919c" containerName="mariadb-database-create" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.391966 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51c99e4-d488-4245-8642-7e02c861919c" containerName="mariadb-database-create" Feb 27 10:37:55 crc kubenswrapper[4998]: E0227 10:37:55.391993 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c7cf30-091f-4dea-bbc1-156ad96a5451" containerName="glance-db-sync" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392000 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c7cf30-091f-4dea-bbc1-156ad96a5451" containerName="glance-db-sync" Feb 27 10:37:55 crc kubenswrapper[4998]: E0227 10:37:55.392013 4998 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b35bf46b-b800-4e48-a90f-1a5e25eb3e3b" containerName="mariadb-account-create-update" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392019 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35bf46b-b800-4e48-a90f-1a5e25eb3e3b" containerName="mariadb-account-create-update" Feb 27 10:37:55 crc kubenswrapper[4998]: E0227 10:37:55.392031 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c813f4-c426-471d-a640-9889450bfec7" containerName="mariadb-account-create-update" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392037 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c813f4-c426-471d-a640-9889450bfec7" containerName="mariadb-account-create-update" Feb 27 10:37:55 crc kubenswrapper[4998]: E0227 10:37:55.392045 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53e7f43-9c1d-487b-984a-f6ea82b5caec" containerName="mariadb-database-create" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392051 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53e7f43-9c1d-487b-984a-f6ea82b5caec" containerName="mariadb-database-create" Feb 27 10:37:55 crc kubenswrapper[4998]: E0227 10:37:55.392062 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99365b3-16bc-4dce-9952-9f5cc37dfe2b" containerName="mariadb-database-create" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392068 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99365b3-16bc-4dce-9952-9f5cc37dfe2b" containerName="mariadb-database-create" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392217 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b458da-5079-4368-935a-74562555231c" containerName="mariadb-account-create-update" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392248 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a99365b3-16bc-4dce-9952-9f5cc37dfe2b" containerName="mariadb-database-create" Feb 27 10:37:55 crc 
kubenswrapper[4998]: I0227 10:37:55.392259 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51c99e4-d488-4245-8642-7e02c861919c" containerName="mariadb-database-create" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392271 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c7cf30-091f-4dea-bbc1-156ad96a5451" containerName="glance-db-sync" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392278 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53e7f43-9c1d-487b-984a-f6ea82b5caec" containerName="mariadb-database-create" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392291 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c813f4-c426-471d-a640-9889450bfec7" containerName="mariadb-account-create-update" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.392300 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b35bf46b-b800-4e48-a90f-1a5e25eb3e3b" containerName="mariadb-account-create-update" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.393193 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.414428 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-brzqt"] Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.469553 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.469713 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.469747 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.469801 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.469817 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-config\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.469862 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75n4\" (UniqueName: \"kubernetes.io/projected/c343a748-2af6-438a-a7ee-760c29b8eba6-kube-api-access-t75n4\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.571456 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.571518 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.571562 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.571584 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-config\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.571804 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t75n4\" (UniqueName: \"kubernetes.io/projected/c343a748-2af6-438a-a7ee-760c29b8eba6-kube-api-access-t75n4\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.571844 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.572633 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.572662 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.573204 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-config\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.573415 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.574075 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.594018 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t75n4\" (UniqueName: \"kubernetes.io/projected/c343a748-2af6-438a-a7ee-760c29b8eba6-kube-api-access-t75n4\") pod \"dnsmasq-dns-74f6bcbc87-brzqt\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.742091 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.884516 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.978840 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-sb\") pod \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.978989 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-svc\") pod \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.979050 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-swift-storage-0\") pod \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.979077 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwbxf\" (UniqueName: \"kubernetes.io/projected/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-kube-api-access-mwbxf\") pod \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.979107 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-nb\") pod \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.979157 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-config\") pod \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\" (UID: \"f71ed8ad-2ac2-4463-9341-87d8dde20ec8\") " Feb 27 10:37:55 crc kubenswrapper[4998]: I0227 10:37:55.991808 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-kube-api-access-mwbxf" (OuterVolumeSpecName: "kube-api-access-mwbxf") pod "f71ed8ad-2ac2-4463-9341-87d8dde20ec8" (UID: "f71ed8ad-2ac2-4463-9341-87d8dde20ec8"). InnerVolumeSpecName "kube-api-access-mwbxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.027681 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f71ed8ad-2ac2-4463-9341-87d8dde20ec8" (UID: "f71ed8ad-2ac2-4463-9341-87d8dde20ec8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.036720 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f71ed8ad-2ac2-4463-9341-87d8dde20ec8" (UID: "f71ed8ad-2ac2-4463-9341-87d8dde20ec8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.036888 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f71ed8ad-2ac2-4463-9341-87d8dde20ec8" (UID: "f71ed8ad-2ac2-4463-9341-87d8dde20ec8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.050029 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f71ed8ad-2ac2-4463-9341-87d8dde20ec8" (UID: "f71ed8ad-2ac2-4463-9341-87d8dde20ec8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.054074 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-config" (OuterVolumeSpecName: "config") pod "f71ed8ad-2ac2-4463-9341-87d8dde20ec8" (UID: "f71ed8ad-2ac2-4463-9341-87d8dde20ec8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.081123 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.081154 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.081165 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwbxf\" (UniqueName: \"kubernetes.io/projected/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-kube-api-access-mwbxf\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.081174 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:56 crc 
kubenswrapper[4998]: I0227 10:37:56.081183 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.081191 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f71ed8ad-2ac2-4463-9341-87d8dde20ec8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.238181 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-brzqt"] Feb 27 10:37:56 crc kubenswrapper[4998]: W0227 10:37:56.242262 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc343a748_2af6_438a_a7ee_760c29b8eba6.slice/crio-33d4547e69864a171b4602810fd9892d02afe6d896645020b0df940e30f4fe54 WatchSource:0}: Error finding container 33d4547e69864a171b4602810fd9892d02afe6d896645020b0df940e30f4fe54: Status 404 returned error can't find the container with id 33d4547e69864a171b4602810fd9892d02afe6d896645020b0df940e30f4fe54 Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.242392 4998 generic.go:334] "Generic (PLEG): container finished" podID="f71ed8ad-2ac2-4463-9341-87d8dde20ec8" containerID="049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637" exitCode=0 Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.242423 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" event={"ID":"f71ed8ad-2ac2-4463-9341-87d8dde20ec8","Type":"ContainerDied","Data":"049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637"} Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.242449 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" 
event={"ID":"f71ed8ad-2ac2-4463-9341-87d8dde20ec8","Type":"ContainerDied","Data":"107a04af26b7a5d95baa8c5dd506a31fcd53611a61532b794ac0cce06d72bd3f"} Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.242465 4998 scope.go:117] "RemoveContainer" containerID="049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.242574 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-dsf4k" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.260870 4998 scope.go:117] "RemoveContainer" containerID="1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.290333 4998 scope.go:117] "RemoveContainer" containerID="049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637" Feb 27 10:37:56 crc kubenswrapper[4998]: E0227 10:37:56.291133 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637\": container with ID starting with 049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637 not found: ID does not exist" containerID="049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.291208 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637"} err="failed to get container status \"049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637\": rpc error: code = NotFound desc = could not find container \"049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637\": container with ID starting with 049ab440c9f02273c2a85a275e0c4fd3ac01dafcb482e995e48f39894c63f637 not found: ID does not exist" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.291263 
4998 scope.go:117] "RemoveContainer" containerID="1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.291456 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-dsf4k"] Feb 27 10:37:56 crc kubenswrapper[4998]: E0227 10:37:56.292095 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b\": container with ID starting with 1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b not found: ID does not exist" containerID="1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.292126 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b"} err="failed to get container status \"1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b\": rpc error: code = NotFound desc = could not find container \"1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b\": container with ID starting with 1af0031a8540c3ae59a9d8f404c215c10d1be72639658620295bb6b14509e92b not found: ID does not exist" Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.300813 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-dsf4k"] Feb 27 10:37:56 crc kubenswrapper[4998]: I0227 10:37:56.774214 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71ed8ad-2ac2-4463-9341-87d8dde20ec8" path="/var/lib/kubelet/pods/f71ed8ad-2ac2-4463-9341-87d8dde20ec8/volumes" Feb 27 10:37:57 crc kubenswrapper[4998]: I0227 10:37:57.255030 4998 generic.go:334] "Generic (PLEG): container finished" podID="c343a748-2af6-438a-a7ee-760c29b8eba6" containerID="9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a" 
exitCode=0 Feb 27 10:37:57 crc kubenswrapper[4998]: I0227 10:37:57.255111 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" event={"ID":"c343a748-2af6-438a-a7ee-760c29b8eba6","Type":"ContainerDied","Data":"9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a"} Feb 27 10:37:57 crc kubenswrapper[4998]: I0227 10:37:57.255142 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" event={"ID":"c343a748-2af6-438a-a7ee-760c29b8eba6","Type":"ContainerStarted","Data":"33d4547e69864a171b4602810fd9892d02afe6d896645020b0df940e30f4fe54"} Feb 27 10:37:58 crc kubenswrapper[4998]: I0227 10:37:58.269446 4998 generic.go:334] "Generic (PLEG): container finished" podID="87a57573-5e1f-4004-bb42-4de9e20de0ef" containerID="90031f06f62cdca0d25a84c9f00af5f52b5558ebac987e5e53ec011eba89cd2f" exitCode=0 Feb 27 10:37:58 crc kubenswrapper[4998]: I0227 10:37:58.269536 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8vc7w" event={"ID":"87a57573-5e1f-4004-bb42-4de9e20de0ef","Type":"ContainerDied","Data":"90031f06f62cdca0d25a84c9f00af5f52b5558ebac987e5e53ec011eba89cd2f"} Feb 27 10:37:58 crc kubenswrapper[4998]: I0227 10:37:58.272478 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" event={"ID":"c343a748-2af6-438a-a7ee-760c29b8eba6","Type":"ContainerStarted","Data":"d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b"} Feb 27 10:37:58 crc kubenswrapper[4998]: I0227 10:37:58.272613 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:37:58 crc kubenswrapper[4998]: I0227 10:37:58.305335 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" podStartSLOduration=3.305313176 podStartE2EDuration="3.305313176s" podCreationTimestamp="2026-02-27 10:37:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:37:58.299684585 +0000 UTC m=+1230.297955573" watchObservedRunningTime="2026-02-27 10:37:58.305313176 +0000 UTC m=+1230.303584144" Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.643832 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.744658 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-combined-ca-bundle\") pod \"87a57573-5e1f-4004-bb42-4de9e20de0ef\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.744753 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-config-data\") pod \"87a57573-5e1f-4004-bb42-4de9e20de0ef\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.744826 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdtp6\" (UniqueName: \"kubernetes.io/projected/87a57573-5e1f-4004-bb42-4de9e20de0ef-kube-api-access-jdtp6\") pod \"87a57573-5e1f-4004-bb42-4de9e20de0ef\" (UID: \"87a57573-5e1f-4004-bb42-4de9e20de0ef\") " Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.750906 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a57573-5e1f-4004-bb42-4de9e20de0ef-kube-api-access-jdtp6" (OuterVolumeSpecName: "kube-api-access-jdtp6") pod "87a57573-5e1f-4004-bb42-4de9e20de0ef" (UID: "87a57573-5e1f-4004-bb42-4de9e20de0ef"). InnerVolumeSpecName "kube-api-access-jdtp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.769324 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87a57573-5e1f-4004-bb42-4de9e20de0ef" (UID: "87a57573-5e1f-4004-bb42-4de9e20de0ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.796618 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-config-data" (OuterVolumeSpecName: "config-data") pod "87a57573-5e1f-4004-bb42-4de9e20de0ef" (UID: "87a57573-5e1f-4004-bb42-4de9e20de0ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.846623 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdtp6\" (UniqueName: \"kubernetes.io/projected/87a57573-5e1f-4004-bb42-4de9e20de0ef-kube-api-access-jdtp6\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.846659 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:59 crc kubenswrapper[4998]: I0227 10:37:59.846673 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a57573-5e1f-4004-bb42-4de9e20de0ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.130340 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536478-6z2th"] Feb 27 10:38:00 crc kubenswrapper[4998]: E0227 10:38:00.131076 4998 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a57573-5e1f-4004-bb42-4de9e20de0ef" containerName="keystone-db-sync" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.131096 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a57573-5e1f-4004-bb42-4de9e20de0ef" containerName="keystone-db-sync" Feb 27 10:38:00 crc kubenswrapper[4998]: E0227 10:38:00.131129 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71ed8ad-2ac2-4463-9341-87d8dde20ec8" containerName="dnsmasq-dns" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.131136 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71ed8ad-2ac2-4463-9341-87d8dde20ec8" containerName="dnsmasq-dns" Feb 27 10:38:00 crc kubenswrapper[4998]: E0227 10:38:00.131143 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71ed8ad-2ac2-4463-9341-87d8dde20ec8" containerName="init" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.131149 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71ed8ad-2ac2-4463-9341-87d8dde20ec8" containerName="init" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.131363 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71ed8ad-2ac2-4463-9341-87d8dde20ec8" containerName="dnsmasq-dns" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.131403 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a57573-5e1f-4004-bb42-4de9e20de0ef" containerName="keystone-db-sync" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.132060 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536478-6z2th" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.134510 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.134508 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.134954 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.138287 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536478-6z2th"] Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.179707 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bkl9\" (UniqueName: \"kubernetes.io/projected/525020c3-603a-4430-8f28-1743f62fb179-kube-api-access-4bkl9\") pod \"auto-csr-approver-29536478-6z2th\" (UID: \"525020c3-603a-4430-8f28-1743f62fb179\") " pod="openshift-infra/auto-csr-approver-29536478-6z2th" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.281900 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bkl9\" (UniqueName: \"kubernetes.io/projected/525020c3-603a-4430-8f28-1743f62fb179-kube-api-access-4bkl9\") pod \"auto-csr-approver-29536478-6z2th\" (UID: \"525020c3-603a-4430-8f28-1743f62fb179\") " pod="openshift-infra/auto-csr-approver-29536478-6z2th" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.289968 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8vc7w" event={"ID":"87a57573-5e1f-4004-bb42-4de9e20de0ef","Type":"ContainerDied","Data":"89731fd3726b12b959dfc8b3b7f4328b23903629fcc1239425c758a8417a1730"} Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 
10:38:00.290014 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89731fd3726b12b959dfc8b3b7f4328b23903629fcc1239425c758a8417a1730" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.290082 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8vc7w" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.309663 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bkl9\" (UniqueName: \"kubernetes.io/projected/525020c3-603a-4430-8f28-1743f62fb179-kube-api-access-4bkl9\") pod \"auto-csr-approver-29536478-6z2th\" (UID: \"525020c3-603a-4430-8f28-1743f62fb179\") " pod="openshift-infra/auto-csr-approver-29536478-6z2th" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.437606 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-brzqt"] Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.437876 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" podUID="c343a748-2af6-438a-a7ee-760c29b8eba6" containerName="dnsmasq-dns" containerID="cri-o://d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b" gracePeriod=10 Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.476623 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mkwcq"] Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.477865 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.480537 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.480640 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4n75w" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.480950 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.481316 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.483847 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.491258 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536478-6z2th" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.505176 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nlkgk"] Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.506563 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.533295 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mkwcq"] Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.566728 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nlkgk"] Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591103 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-config\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591148 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-fernet-keys\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591185 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-scripts\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591207 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-config-data\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 
10:38:00.591267 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591285 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591312 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-credential-keys\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591326 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq96p\" (UniqueName: \"kubernetes.io/projected/8c77da3a-fe67-4207-8d0e-8f8938c0902b-kube-api-access-jq96p\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591347 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-svc\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 
27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591367 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j679\" (UniqueName: \"kubernetes.io/projected/d6f746cd-8061-4166-98ab-6d7b8151deb1-kube-api-access-6j679\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591390 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-combined-ca-bundle\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.591410 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.701365 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j679\" (UniqueName: \"kubernetes.io/projected/d6f746cd-8061-4166-98ab-6d7b8151deb1-kube-api-access-6j679\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.701704 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-combined-ca-bundle\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " 
pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.701747 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.701802 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-config\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.701842 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-fernet-keys\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.701907 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-scripts\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.701945 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-config-data\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.702044 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.702067 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.702125 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-credential-keys\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.702145 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq96p\" (UniqueName: \"kubernetes.io/projected/8c77da3a-fe67-4207-8d0e-8f8938c0902b-kube-api-access-jq96p\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.702194 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-svc\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.703429 4998 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-config\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.703429 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-svc\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.705612 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.716459 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-credential-keys\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.717306 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.718262 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.728796 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-fernet-keys\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.732181 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-config-data\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.734643 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-scripts\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.748348 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-combined-ca-bundle\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.752489 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-954574d65-vqjfh"] Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.766797 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j679\" 
(UniqueName: \"kubernetes.io/projected/d6f746cd-8061-4166-98ab-6d7b8151deb1-kube-api-access-6j679\") pod \"keystone-bootstrap-mkwcq\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.797943 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq96p\" (UniqueName: \"kubernetes.io/projected/8c77da3a-fe67-4207-8d0e-8f8938c0902b-kube-api-access-jq96p\") pod \"dnsmasq-dns-847c4cc679-nlkgk\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.801021 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.812636 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2xz46" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.812991 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.813105 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.813208 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.813606 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.825308 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.874690 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-954574d65-vqjfh"] Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.874775 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-b8h8c"] Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.876094 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.878411 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.878587 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.878863 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vp4t7" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.907153 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-scripts\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.907270 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-947lq\" (UniqueName: \"kubernetes.io/projected/e758d850-f266-4136-8ad2-9f7cf30bc777-kube-api-access-947lq\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.907315 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-config-data\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.907391 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e758d850-f266-4136-8ad2-9f7cf30bc777-logs\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.907415 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e758d850-f266-4136-8ad2-9f7cf30bc777-horizon-secret-key\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:00 crc kubenswrapper[4998]: I0227 10:38:00.989337 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b8h8c"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.011455 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-combined-ca-bundle\") pod \"neutron-db-sync-b8h8c\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.024972 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e758d850-f266-4136-8ad2-9f7cf30bc777-logs\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 
10:38:01.025030 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e758d850-f266-4136-8ad2-9f7cf30bc777-horizon-secret-key\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.025098 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-config\") pod \"neutron-db-sync-b8h8c\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.025157 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-scripts\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.025422 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-947lq\" (UniqueName: \"kubernetes.io/projected/e758d850-f266-4136-8ad2-9f7cf30bc777-kube-api-access-947lq\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.025462 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-config-data\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.025603 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dkszr\" (UniqueName: \"kubernetes.io/projected/97372f36-bf18-4a79-917b-cf9b6d0f92a2-kube-api-access-dkszr\") pod \"neutron-db-sync-b8h8c\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.031543 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e758d850-f266-4136-8ad2-9f7cf30bc777-logs\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.034153 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-scripts\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.035535 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-config-data\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.055877 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nlkgk"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.074012 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e758d850-f266-4136-8ad2-9f7cf30bc777-horizon-secret-key\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.075215 4998 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-db-sync-tf4n2"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.076207 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.082419 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-947lq\" (UniqueName: \"kubernetes.io/projected/e758d850-f266-4136-8ad2-9f7cf30bc777-kube-api-access-947lq\") pod \"horizon-954574d65-vqjfh\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.087290 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.087467 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.087666 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-z9824" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.096316 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tf4n2"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.127906 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-combined-ca-bundle\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.127973 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-config\") pod \"neutron-db-sync-b8h8c\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " 
pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.128047 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-db-sync-config-data\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.128146 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkszr\" (UniqueName: \"kubernetes.io/projected/97372f36-bf18-4a79-917b-cf9b6d0f92a2-kube-api-access-dkszr\") pod \"neutron-db-sync-b8h8c\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.128166 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-scripts\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.128199 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-config-data\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.128254 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-combined-ca-bundle\") pod \"neutron-db-sync-b8h8c\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc 
kubenswrapper[4998]: I0227 10:38:01.128312 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvmf\" (UniqueName: \"kubernetes.io/projected/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-kube-api-access-zdvmf\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.128340 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-etc-machine-id\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.146009 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-5x42c"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.147410 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.149072 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-combined-ca-bundle\") pod \"neutron-db-sync-b8h8c\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.149171 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-config\") pod \"neutron-db-sync-b8h8c\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.161531 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-58gdq" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.161757 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.172112 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5x42c"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.179973 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkszr\" (UniqueName: \"kubernetes.io/projected/97372f36-bf18-4a79-917b-cf9b6d0f92a2-kube-api-access-dkszr\") pod \"neutron-db-sync-b8h8c\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.198018 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-59cmz"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.205444 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.222342 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.224660 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.226678 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.226835 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.234091 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdvmf\" (UniqueName: \"kubernetes.io/projected/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-kube-api-access-zdvmf\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.234129 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-etc-machine-id\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.234157 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-combined-ca-bundle\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.234201 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-db-sync-config-data\") pod \"barbican-db-sync-5x42c\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.234332 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-combined-ca-bundle\") pod \"barbican-db-sync-5x42c\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.234395 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-db-sync-config-data\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.234549 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-scripts\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.234586 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-config-data\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.234618 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmz29\" (UniqueName: 
\"kubernetes.io/projected/d455fe16-80bf-42c1-be16-a87102249bf8-kube-api-access-wmz29\") pod \"barbican-db-sync-5x42c\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.235564 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-etc-machine-id\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.238273 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.241249 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-combined-ca-bundle\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.241301 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-plff2"] Feb 27 10:38:01 crc kubenswrapper[4998]: E0227 10:38:01.241647 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c343a748-2af6-438a-a7ee-760c29b8eba6" containerName="init" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.241659 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="c343a748-2af6-438a-a7ee-760c29b8eba6" containerName="init" Feb 27 10:38:01 crc kubenswrapper[4998]: E0227 10:38:01.241677 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c343a748-2af6-438a-a7ee-760c29b8eba6" containerName="dnsmasq-dns" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.241682 4998 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c343a748-2af6-438a-a7ee-760c29b8eba6" containerName="dnsmasq-dns" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.241826 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="c343a748-2af6-438a-a7ee-760c29b8eba6" containerName="dnsmasq-dns" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.242369 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.245172 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xlt66" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.245439 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.246108 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.255682 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.257188 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-db-sync-config-data\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.257454 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-config-data\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.262968 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.268113 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-scripts\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.271653 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.271687 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdvmf\" (UniqueName: \"kubernetes.io/projected/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-kube-api-access-zdvmf\") pod \"cinder-db-sync-tf4n2\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") " pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.273869 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hwvwx" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.274043 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.274100 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-plff2"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.284660 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.287149 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.289091 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-59cmz"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.316497 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.330191 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.336553 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-swift-storage-0\") pod \"c343a748-2af6-438a-a7ee-760c29b8eba6\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.336621 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-svc\") pod \"c343a748-2af6-438a-a7ee-760c29b8eba6\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.336675 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t75n4\" (UniqueName: \"kubernetes.io/projected/c343a748-2af6-438a-a7ee-760c29b8eba6-kube-api-access-t75n4\") pod \"c343a748-2af6-438a-a7ee-760c29b8eba6\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.336700 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-sb\") pod \"c343a748-2af6-438a-a7ee-760c29b8eba6\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " Feb 27 10:38:01 crc 
kubenswrapper[4998]: I0227 10:38:01.336720 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-nb\") pod \"c343a748-2af6-438a-a7ee-760c29b8eba6\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.336817 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-config\") pod \"c343a748-2af6-438a-a7ee-760c29b8eba6\" (UID: \"c343a748-2af6-438a-a7ee-760c29b8eba6\") " Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337102 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337135 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk6cf\" (UniqueName: \"kubernetes.io/projected/e55e4748-da26-4ed7-8bba-e7260a78ba19-kube-api-access-nk6cf\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337156 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-combined-ca-bundle\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337176 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-logs\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337196 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-scripts\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337214 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-scripts\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337251 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337280 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337307 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-run-httpd\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337336 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337361 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-log-httpd\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337381 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-config\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337400 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55e4748-da26-4ed7-8bba-e7260a78ba19-logs\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337421 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmz29\" (UniqueName: 
\"kubernetes.io/projected/d455fe16-80bf-42c1-be16-a87102249bf8-kube-api-access-wmz29\") pod \"barbican-db-sync-5x42c\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337456 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337474 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-config-data\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337490 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337510 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337525 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd95s\" (UniqueName: 
\"kubernetes.io/projected/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-kube-api-access-nd95s\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337545 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-config-data\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337566 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337584 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvtx\" (UniqueName: \"kubernetes.io/projected/63050361-4a13-4b25-8c1a-ff9fed854172-kube-api-access-6pvtx\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337604 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-db-sync-config-data\") pod \"barbican-db-sync-5x42c\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337627 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337647 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-combined-ca-bundle\") pod \"barbican-db-sync-5x42c\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337665 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czx6r\" (UniqueName: \"kubernetes.io/projected/a6d3a1f2-9e57-4c10-9480-669366053f4b-kube-api-access-czx6r\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337680 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-config-data\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337696 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.337734 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-scripts\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.341699 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5844564ccc-p674r"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.342833 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c343a748-2af6-438a-a7ee-760c29b8eba6-kube-api-access-t75n4" (OuterVolumeSpecName: "kube-api-access-t75n4") pod "c343a748-2af6-438a-a7ee-760c29b8eba6" (UID: "c343a748-2af6-438a-a7ee-760c29b8eba6"). InnerVolumeSpecName "kube-api-access-t75n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.346602 4998 generic.go:334] "Generic (PLEG): container finished" podID="c343a748-2af6-438a-a7ee-760c29b8eba6" containerID="d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b" exitCode=0 Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.346695 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.349128 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" event={"ID":"c343a748-2af6-438a-a7ee-760c29b8eba6","Type":"ContainerDied","Data":"d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b"} Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.349162 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-brzqt" event={"ID":"c343a748-2af6-438a-a7ee-760c29b8eba6","Type":"ContainerDied","Data":"33d4547e69864a171b4602810fd9892d02afe6d896645020b0df940e30f4fe54"} Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.349183 4998 scope.go:117] "RemoveContainer" containerID="d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.350255 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.358805 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-db-sync-config-data\") pod \"barbican-db-sync-5x42c\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.359969 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-combined-ca-bundle\") pod \"barbican-db-sync-5x42c\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.372108 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmz29\" (UniqueName: 
\"kubernetes.io/projected/d455fe16-80bf-42c1-be16-a87102249bf8-kube-api-access-wmz29\") pod \"barbican-db-sync-5x42c\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.402043 4998 scope.go:117] "RemoveContainer" containerID="9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.405736 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.415313 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.418079 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.444006 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tf4n2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.444508 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5844564ccc-p674r"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445575 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445607 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-config-data\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445633 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445659 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445682 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd95s\" (UniqueName: \"kubernetes.io/projected/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-kube-api-access-nd95s\") pod 
\"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445710 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-config-data\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445737 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25bj\" (UniqueName: \"kubernetes.io/projected/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-kube-api-access-x25bj\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445765 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445795 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pvtx\" (UniqueName: \"kubernetes.io/projected/63050361-4a13-4b25-8c1a-ff9fed854172-kube-api-access-6pvtx\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445829 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " 
pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445861 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czx6r\" (UniqueName: \"kubernetes.io/projected/a6d3a1f2-9e57-4c10-9480-669366053f4b-kube-api-access-czx6r\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445884 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-config-data\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445905 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445949 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-scripts\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.445976 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-scripts\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc 
kubenswrapper[4998]: I0227 10:38:01.446001 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446031 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk6cf\" (UniqueName: \"kubernetes.io/projected/e55e4748-da26-4ed7-8bba-e7260a78ba19-kube-api-access-nk6cf\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446057 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-combined-ca-bundle\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446082 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-logs\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446116 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-scripts\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446142 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-scripts\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446162 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446186 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-logs\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446206 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446260 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-config-data\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446288 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-run-httpd\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446315 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446359 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-log-httpd\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446381 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-horizon-secret-key\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446403 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-config\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446431 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55e4748-da26-4ed7-8bba-e7260a78ba19-logs\") pod \"placement-db-sync-plff2\" (UID: 
\"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446497 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t75n4\" (UniqueName: \"kubernetes.io/projected/c343a748-2af6-438a-a7ee-760c29b8eba6-kube-api-access-t75n4\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.447247 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.447420 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.451035 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.452655 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55e4748-da26-4ed7-8bba-e7260a78ba19-logs\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.455017 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.457019 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-logs\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc 
kubenswrapper[4998]: I0227 10:38:01.457724 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.446257 4998 scope.go:117] "RemoveContainer" containerID="d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.458081 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.469118 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-scripts\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.470427 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-config-data\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.470492 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-log-httpd\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " 
pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.470709 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-run-httpd\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.470732 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.471216 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.473653 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: E0227 10:38:01.474094 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b\": container with ID starting with d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b not found: ID does not exist" containerID="d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b" Feb 27 10:38:01 crc kubenswrapper[4998]: 
I0227 10:38:01.474136 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b"} err="failed to get container status \"d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b\": rpc error: code = NotFound desc = could not find container \"d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b\": container with ID starting with d32d354523fc4c3ab615bd26e04a4ac9b788c451e69bb434b39ade2249a56b8b not found: ID does not exist" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.474164 4998 scope.go:117] "RemoveContainer" containerID="9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.467104 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: E0227 10:38:01.475743 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a\": container with ID starting with 9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a not found: ID does not exist" containerID="9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.475791 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a"} err="failed to get container status \"9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a\": rpc error: code = NotFound desc = could not find container 
\"9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a\": container with ID starting with 9279a5ff61a85b4b7dad373de2a2ee92dc19045a38409d1fd4aa97ddb911815a not found: ID does not exist" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.477624 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-config-data\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.481978 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-combined-ca-bundle\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.495600 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-config\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.497194 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd95s\" (UniqueName: \"kubernetes.io/projected/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-kube-api-access-nd95s\") pod \"dnsmasq-dns-785d8bcb8c-59cmz\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.498917 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pvtx\" (UniqueName: \"kubernetes.io/projected/63050361-4a13-4b25-8c1a-ff9fed854172-kube-api-access-6pvtx\") pod \"ceilometer-0\" (UID: 
\"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.498961 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.501863 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-scripts\") pod \"ceilometer-0\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") " pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.502271 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.505659 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-scripts\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.509383 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-config-data\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.532622 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk6cf\" (UniqueName: \"kubernetes.io/projected/e55e4748-da26-4ed7-8bba-e7260a78ba19-kube-api-access-nk6cf\") pod \"placement-db-sync-plff2\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " 
pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.532686 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.533130 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.534624 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czx6r\" (UniqueName: \"kubernetes.io/projected/a6d3a1f2-9e57-4c10-9480-669366053f4b-kube-api-access-czx6r\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.536351 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c343a748-2af6-438a-a7ee-760c29b8eba6" (UID: "c343a748-2af6-438a-a7ee-760c29b8eba6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.536886 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.544191 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-config" (OuterVolumeSpecName: "config") pod "c343a748-2af6-438a-a7ee-760c29b8eba6" (UID: "c343a748-2af6-438a-a7ee-760c29b8eba6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551639 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-scripts\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551716 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551734 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-logs\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551791 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551828 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-logs\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551847 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-config-data\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551890 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551914 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 
10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551940 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-horizon-secret-key\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.551990 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.552036 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.552086 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x25bj\" (UniqueName: \"kubernetes.io/projected/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-kube-api-access-x25bj\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.552109 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t44cx\" (UniqueName: \"kubernetes.io/projected/52d87daa-c2cb-4bcf-b365-0333589800e4-kube-api-access-t44cx\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" 
Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.552215 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.552275 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.552860 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-scripts\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.553083 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-logs\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.553964 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-config-data\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.558108 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-horizon-secret-key\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc 
kubenswrapper[4998]: I0227 10:38:01.610074 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c343a748-2af6-438a-a7ee-760c29b8eba6" (UID: "c343a748-2af6-438a-a7ee-760c29b8eba6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.617610 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.619208 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-plff2" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.631055 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c343a748-2af6-438a-a7ee-760c29b8eba6" (UID: "c343a748-2af6-438a-a7ee-760c29b8eba6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.631728 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x25bj\" (UniqueName: \"kubernetes.io/projected/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-kube-api-access-x25bj\") pod \"horizon-5844564ccc-p674r\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.635483 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c343a748-2af6-438a-a7ee-760c29b8eba6" (UID: "c343a748-2af6-438a-a7ee-760c29b8eba6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.648713 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.653959 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.654003 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.654041 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t44cx\" (UniqueName: \"kubernetes.io/projected/52d87daa-c2cb-4bcf-b365-0333589800e4-kube-api-access-t44cx\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.654104 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-logs\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: 
I0227 10:38:01.654123 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.654166 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.654206 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.654241 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.654301 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.654312 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-dns-svc\") on node 
\"crc\" DevicePath \"\"" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.654321 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c343a748-2af6-438a-a7ee-760c29b8eba6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.655424 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.655466 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.655822 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-logs\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.658950 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.659112 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.660081 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.683736 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.714380 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.742481 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nlkgk"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.744844 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t44cx\" (UniqueName: \"kubernetes.io/projected/52d87daa-c2cb-4bcf-b365-0333589800e4-kube-api-access-t44cx\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.761570 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536478-6z2th"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.768628 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.786618 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mkwcq"] Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.935650 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:38:01 crc kubenswrapper[4998]: I0227 10:38:01.977616 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-brzqt"] Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.008270 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-brzqt"] Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.022010 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-954574d65-vqjfh"] Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.076622 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.373106 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-954574d65-vqjfh" event={"ID":"e758d850-f266-4136-8ad2-9f7cf30bc777","Type":"ContainerStarted","Data":"7bcbd2176035981790f57daeb17b4522c4f287a411fa7f6d987bb03015d6c1cc"} Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.377503 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" event={"ID":"8c77da3a-fe67-4207-8d0e-8f8938c0902b","Type":"ContainerStarted","Data":"19cc3a65f3446a50193cd879c70976e4f9b325316b151a2f435e211fded98326"} Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.390649 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536478-6z2th" event={"ID":"525020c3-603a-4430-8f28-1743f62fb179","Type":"ContainerStarted","Data":"adf25c57b95b354e22b06000c11471d65dc17919376fd493bcc91747edd782db"} Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.409066 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkwcq" 
event={"ID":"d6f746cd-8061-4166-98ab-6d7b8151deb1","Type":"ContainerStarted","Data":"67d8c8bcdc26f2b7777e8d423f7bf91875531ce8e98056825c557fb6bf15825d"} Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.530123 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b8h8c"] Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.555901 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tf4n2"] Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.630029 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-59cmz"] Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.785365 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c343a748-2af6-438a-a7ee-760c29b8eba6" path="/var/lib/kubelet/pods/c343a748-2af6-438a-a7ee-760c29b8eba6/volumes" Feb 27 10:38:02 crc kubenswrapper[4998]: I0227 10:38:02.952591 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-954574d65-vqjfh"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.007542 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.017914 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cf746bbbf-fvmpp"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.019486 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.059290 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cf746bbbf-fvmpp"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.069324 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-plff2"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.077441 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.111894 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5x42c"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.125247 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-scripts\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.125298 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-config-data\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.125316 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c739ae9-3515-4373-9dc4-80f1b866e4c8-horizon-secret-key\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.125423 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c739ae9-3515-4373-9dc4-80f1b866e4c8-logs\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.125448 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tddvl\" (UniqueName: \"kubernetes.io/projected/9c739ae9-3515-4373-9dc4-80f1b866e4c8-kube-api-access-tddvl\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.135114 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.153143 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5844564ccc-p674r"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.171496 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.227702 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tddvl\" (UniqueName: \"kubernetes.io/projected/9c739ae9-3515-4373-9dc4-80f1b866e4c8-kube-api-access-tddvl\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.227870 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-scripts\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.227914 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-config-data\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.227937 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c739ae9-3515-4373-9dc4-80f1b866e4c8-horizon-secret-key\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.228084 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c739ae9-3515-4373-9dc4-80f1b866e4c8-logs\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.229540 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-config-data\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.230074 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-scripts\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.233164 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9c739ae9-3515-4373-9dc4-80f1b866e4c8-logs\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.260644 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tddvl\" (UniqueName: \"kubernetes.io/projected/9c739ae9-3515-4373-9dc4-80f1b866e4c8-kube-api-access-tddvl\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.260924 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.272401 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c739ae9-3515-4373-9dc4-80f1b866e4c8-horizon-secret-key\") pod \"horizon-7cf746bbbf-fvmpp\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: W0227 10:38:03.302259 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6d3a1f2_9e57_4c10_9480_669366053f4b.slice/crio-8238718ed2c65702164469bd57b659441f4fcccdd1360a18af49b3857ed9c977 WatchSource:0}: Error finding container 8238718ed2c65702164469bd57b659441f4fcccdd1360a18af49b3857ed9c977: Status 404 returned error can't find the container with id 8238718ed2c65702164469bd57b659441f4fcccdd1360a18af49b3857ed9c977 Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.384017 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.452640 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d3a1f2-9e57-4c10-9480-669366053f4b","Type":"ContainerStarted","Data":"8238718ed2c65702164469bd57b659441f4fcccdd1360a18af49b3857ed9c977"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.458484 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5844564ccc-p674r" event={"ID":"3819243f-aca7-47b2-8ed5-ea24e21c8ca4","Type":"ContainerStarted","Data":"cacee258059665f98edbb81905afc282817cf8df2f9bea109f38ecadf9d14331"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.462001 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-plff2" event={"ID":"e55e4748-da26-4ed7-8bba-e7260a78ba19","Type":"ContainerStarted","Data":"6e227790258b331609f52b1c9341ffbabdb338cfb175d53f3d4a928e2ad8b547"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.471135 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerStarted","Data":"84892f17463b620030b60d047789717dafaa1e5dbeea66da153ddf671f8677fa"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.486210 4998 generic.go:334] "Generic (PLEG): container finished" podID="8c77da3a-fe67-4207-8d0e-8f8938c0902b" containerID="f5fdb999c832c34aa725a920bacaca09150cd6b02830ec845c9b20b626b6c245" exitCode=0 Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.486311 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" event={"ID":"8c77da3a-fe67-4207-8d0e-8f8938c0902b","Type":"ContainerDied","Data":"f5fdb999c832c34aa725a920bacaca09150cd6b02830ec845c9b20b626b6c245"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.493342 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-tf4n2" event={"ID":"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592","Type":"ContainerStarted","Data":"543a0ae511145be1c75bb0fe8ed3bcb18995ba599ac85b7d568ac91b207e77c8"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.499877 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b8h8c" event={"ID":"97372f36-bf18-4a79-917b-cf9b6d0f92a2","Type":"ContainerStarted","Data":"de81a5f70de9a03abbd0d4f927a765587abce470f9ae52dc8086c284b44f0c93"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.499931 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b8h8c" event={"ID":"97372f36-bf18-4a79-917b-cf9b6d0f92a2","Type":"ContainerStarted","Data":"aceba0ec71dae70656679e65d811b53fd10f444d3c8610573b64b23e7a678d50"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.539938 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkwcq" event={"ID":"d6f746cd-8061-4166-98ab-6d7b8151deb1","Type":"ContainerStarted","Data":"f25d062ee75caa879239d82c88522c8a89ff642eb2088ca30aca323a958a5c6e"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.542885 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-b8h8c" podStartSLOduration=3.542862954 podStartE2EDuration="3.542862954s" podCreationTimestamp="2026-02-27 10:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:03.533846707 +0000 UTC m=+1235.532117685" watchObservedRunningTime="2026-02-27 10:38:03.542862954 +0000 UTC m=+1235.541133922" Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.555556 4998 generic.go:334] "Generic (PLEG): container finished" podID="6c82c2bb-efea-40ad-9915-c9c61d9f53cf" containerID="e06d77fca15ce2b688b643ad6b139954c8693ce9c53763f5b646060f0ef1cfe4" exitCode=0 Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.555641 
4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" event={"ID":"6c82c2bb-efea-40ad-9915-c9c61d9f53cf","Type":"ContainerDied","Data":"e06d77fca15ce2b688b643ad6b139954c8693ce9c53763f5b646060f0ef1cfe4"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.555667 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" event={"ID":"6c82c2bb-efea-40ad-9915-c9c61d9f53cf","Type":"ContainerStarted","Data":"999c6b964a6edc639c6b4adb6b0df1b8b48568aa45e963f0bc0e6696fdfe26f9"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.557119 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5x42c" event={"ID":"d455fe16-80bf-42c1-be16-a87102249bf8","Type":"ContainerStarted","Data":"e50cd2128f7e91c20076b24bc3e650b4a658d257e453555c893d70f3ae1f9226"} Feb 27 10:38:03 crc kubenswrapper[4998]: I0227 10:38:03.604062 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mkwcq" podStartSLOduration=3.6040377059999997 podStartE2EDuration="3.604037706s" podCreationTimestamp="2026-02-27 10:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:03.581948645 +0000 UTC m=+1235.580219613" watchObservedRunningTime="2026-02-27 10:38:03.604037706 +0000 UTC m=+1235.602308674" Feb 27 10:38:03 crc kubenswrapper[4998]: E0227 10:38:03.655744 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c82c2bb_efea_40ad_9915_c9c61d9f53cf.slice/crio-conmon-e06d77fca15ce2b688b643ad6b139954c8693ce9c53763f5b646060f0ef1cfe4.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c82c2bb_efea_40ad_9915_c9c61d9f53cf.slice/crio-e06d77fca15ce2b688b643ad6b139954c8693ce9c53763f5b646060f0ef1cfe4.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.072145 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.171404 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cf746bbbf-fvmpp"] Feb 27 10:38:04 crc kubenswrapper[4998]: W0227 10:38:04.175329 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c739ae9_3515_4373_9dc4_80f1b866e4c8.slice/crio-acce384e04ac53334de54734447e439507a269b1ce001d7f6e03aca8a3d0218c WatchSource:0}: Error finding container acce384e04ac53334de54734447e439507a269b1ce001d7f6e03aca8a3d0218c: Status 404 returned error can't find the container with id acce384e04ac53334de54734447e439507a269b1ce001d7f6e03aca8a3d0218c Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.184985 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-config\") pod \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.185256 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-svc\") pod \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.185377 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-sb\") pod \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.185566 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq96p\" (UniqueName: \"kubernetes.io/projected/8c77da3a-fe67-4207-8d0e-8f8938c0902b-kube-api-access-jq96p\") pod \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.185667 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-nb\") pod \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.185728 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-swift-storage-0\") pod \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\" (UID: \"8c77da3a-fe67-4207-8d0e-8f8938c0902b\") " Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.203881 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c77da3a-fe67-4207-8d0e-8f8938c0902b-kube-api-access-jq96p" (OuterVolumeSpecName: "kube-api-access-jq96p") pod "8c77da3a-fe67-4207-8d0e-8f8938c0902b" (UID: "8c77da3a-fe67-4207-8d0e-8f8938c0902b"). InnerVolumeSpecName "kube-api-access-jq96p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.210783 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c77da3a-fe67-4207-8d0e-8f8938c0902b" (UID: "8c77da3a-fe67-4207-8d0e-8f8938c0902b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.227403 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c77da3a-fe67-4207-8d0e-8f8938c0902b" (UID: "8c77da3a-fe67-4207-8d0e-8f8938c0902b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.255896 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c77da3a-fe67-4207-8d0e-8f8938c0902b" (UID: "8c77da3a-fe67-4207-8d0e-8f8938c0902b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.263528 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c77da3a-fe67-4207-8d0e-8f8938c0902b" (UID: "8c77da3a-fe67-4207-8d0e-8f8938c0902b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.265273 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-config" (OuterVolumeSpecName: "config") pod "8c77da3a-fe67-4207-8d0e-8f8938c0902b" (UID: "8c77da3a-fe67-4207-8d0e-8f8938c0902b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.271138 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.289123 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq96p\" (UniqueName: \"kubernetes.io/projected/8c77da3a-fe67-4207-8d0e-8f8938c0902b-kube-api-access-jq96p\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.289163 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.289177 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.289189 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.289200 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:04 crc 
kubenswrapper[4998]: I0227 10:38:04.289211 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c77da3a-fe67-4207-8d0e-8f8938c0902b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:04 crc kubenswrapper[4998]: W0227 10:38:04.310574 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d87daa_c2cb_4bcf_b365_0333589800e4.slice/crio-853ac6072ac5ead6add5ba0bd0bc5dac7a3862eab15c19b203498235c9c3ac84 WatchSource:0}: Error finding container 853ac6072ac5ead6add5ba0bd0bc5dac7a3862eab15c19b203498235c9c3ac84: Status 404 returned error can't find the container with id 853ac6072ac5ead6add5ba0bd0bc5dac7a3862eab15c19b203498235c9c3ac84 Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.582764 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" event={"ID":"8c77da3a-fe67-4207-8d0e-8f8938c0902b","Type":"ContainerDied","Data":"19cc3a65f3446a50193cd879c70976e4f9b325316b151a2f435e211fded98326"} Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.583138 4998 scope.go:117] "RemoveContainer" containerID="f5fdb999c832c34aa725a920bacaca09150cd6b02830ec845c9b20b626b6c245" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.582764 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-nlkgk" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.606710 4998 generic.go:334] "Generic (PLEG): container finished" podID="525020c3-603a-4430-8f28-1743f62fb179" containerID="68e79fd98d37b7031b30a7a293616484b3448431de5ca8febeb855b0cf6bfa4c" exitCode=0 Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.606817 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536478-6z2th" event={"ID":"525020c3-603a-4430-8f28-1743f62fb179","Type":"ContainerDied","Data":"68e79fd98d37b7031b30a7a293616484b3448431de5ca8febeb855b0cf6bfa4c"} Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.623890 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d3a1f2-9e57-4c10-9480-669366053f4b","Type":"ContainerStarted","Data":"8d8a0d2fef601f74d77d7101ab158b41f5db80a93c85d0402b61c7357155bd16"} Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.625703 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d87daa-c2cb-4bcf-b365-0333589800e4","Type":"ContainerStarted","Data":"853ac6072ac5ead6add5ba0bd0bc5dac7a3862eab15c19b203498235c9c3ac84"} Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.627320 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cf746bbbf-fvmpp" event={"ID":"9c739ae9-3515-4373-9dc4-80f1b866e4c8","Type":"ContainerStarted","Data":"acce384e04ac53334de54734447e439507a269b1ce001d7f6e03aca8a3d0218c"} Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.662353 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" event={"ID":"6c82c2bb-efea-40ad-9915-c9c61d9f53cf","Type":"ContainerStarted","Data":"78f2e3cb632e97aca39a37ee02726f2d018a2584bae0e246feb315f4c646eda3"} Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.662401 4998 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.732551 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nlkgk"] Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.759401 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-nlkgk"] Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.792997 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" podStartSLOduration=4.792974217 podStartE2EDuration="4.792974217s" podCreationTimestamp="2026-02-27 10:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:04.768713627 +0000 UTC m=+1236.766984605" watchObservedRunningTime="2026-02-27 10:38:04.792974217 +0000 UTC m=+1236.791245175" Feb 27 10:38:04 crc kubenswrapper[4998]: I0227 10:38:04.923431 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c77da3a-fe67-4207-8d0e-8f8938c0902b" path="/var/lib/kubelet/pods/8c77da3a-fe67-4207-8d0e-8f8938c0902b/volumes" Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.153089 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536478-6z2th" Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.230031 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bkl9\" (UniqueName: \"kubernetes.io/projected/525020c3-603a-4430-8f28-1743f62fb179-kube-api-access-4bkl9\") pod \"525020c3-603a-4430-8f28-1743f62fb179\" (UID: \"525020c3-603a-4430-8f28-1743f62fb179\") " Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.235789 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525020c3-603a-4430-8f28-1743f62fb179-kube-api-access-4bkl9" (OuterVolumeSpecName: "kube-api-access-4bkl9") pod "525020c3-603a-4430-8f28-1743f62fb179" (UID: "525020c3-603a-4430-8f28-1743f62fb179"). InnerVolumeSpecName "kube-api-access-4bkl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.332605 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bkl9\" (UniqueName: \"kubernetes.io/projected/525020c3-603a-4430-8f28-1743f62fb179-kube-api-access-4bkl9\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.688283 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536478-6z2th" event={"ID":"525020c3-603a-4430-8f28-1743f62fb179","Type":"ContainerDied","Data":"adf25c57b95b354e22b06000c11471d65dc17919376fd493bcc91747edd782db"} Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.688326 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf25c57b95b354e22b06000c11471d65dc17919376fd493bcc91747edd782db" Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.688387 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536478-6z2th" Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.700095 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d3a1f2-9e57-4c10-9480-669366053f4b","Type":"ContainerStarted","Data":"3bd6ae72de7a2e630c6afbc45e6bf84e157d6024d722f2ac89b29182aa9b90fb"} Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.700289 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerName="glance-log" containerID="cri-o://8d8a0d2fef601f74d77d7101ab158b41f5db80a93c85d0402b61c7357155bd16" gracePeriod=30 Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.700769 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerName="glance-httpd" containerID="cri-o://3bd6ae72de7a2e630c6afbc45e6bf84e157d6024d722f2ac89b29182aa9b90fb" gracePeriod=30 Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.704445 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d87daa-c2cb-4bcf-b365-0333589800e4","Type":"ContainerStarted","Data":"49b5f63b3c9365790e6f33a8ce9744ba4a33b0c79cf429c553eea414f2918134"} Feb 27 10:38:06 crc kubenswrapper[4998]: I0227 10:38:06.758032 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.758007969 podStartE2EDuration="5.758007969s" podCreationTimestamp="2026-02-27 10:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:06.753804416 +0000 UTC m=+1238.752075384" watchObservedRunningTime="2026-02-27 10:38:06.758007969 +0000 UTC 
m=+1238.756278937" Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.245205 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536472-f7wqm"] Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.263867 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536472-f7wqm"] Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.720689 4998 generic.go:334] "Generic (PLEG): container finished" podID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerID="3bd6ae72de7a2e630c6afbc45e6bf84e157d6024d722f2ac89b29182aa9b90fb" exitCode=0 Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.721024 4998 generic.go:334] "Generic (PLEG): container finished" podID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerID="8d8a0d2fef601f74d77d7101ab158b41f5db80a93c85d0402b61c7357155bd16" exitCode=143 Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.721584 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d3a1f2-9e57-4c10-9480-669366053f4b","Type":"ContainerDied","Data":"3bd6ae72de7a2e630c6afbc45e6bf84e157d6024d722f2ac89b29182aa9b90fb"} Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.721642 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d3a1f2-9e57-4c10-9480-669366053f4b","Type":"ContainerDied","Data":"8d8a0d2fef601f74d77d7101ab158b41f5db80a93c85d0402b61c7357155bd16"} Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.725308 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d87daa-c2cb-4bcf-b365-0333589800e4","Type":"ContainerStarted","Data":"ed4668b054966dd7317556579dc533a819bfcb8f8c15230c4dcd7a75075814e0"} Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.726250 4998 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerName="glance-log" containerID="cri-o://49b5f63b3c9365790e6f33a8ce9744ba4a33b0c79cf429c553eea414f2918134" gracePeriod=30 Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.726311 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerName="glance-httpd" containerID="cri-o://ed4668b054966dd7317556579dc533a819bfcb8f8c15230c4dcd7a75075814e0" gracePeriod=30 Feb 27 10:38:07 crc kubenswrapper[4998]: I0227 10:38:07.770353 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.770334067 podStartE2EDuration="6.770334067s" podCreationTimestamp="2026-02-27 10:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:07.758599545 +0000 UTC m=+1239.756870513" watchObservedRunningTime="2026-02-27 10:38:07.770334067 +0000 UTC m=+1239.768605035" Feb 27 10:38:08 crc kubenswrapper[4998]: I0227 10:38:08.738345 4998 generic.go:334] "Generic (PLEG): container finished" podID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerID="ed4668b054966dd7317556579dc533a819bfcb8f8c15230c4dcd7a75075814e0" exitCode=0 Feb 27 10:38:08 crc kubenswrapper[4998]: I0227 10:38:08.738512 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d87daa-c2cb-4bcf-b365-0333589800e4","Type":"ContainerDied","Data":"ed4668b054966dd7317556579dc533a819bfcb8f8c15230c4dcd7a75075814e0"} Feb 27 10:38:08 crc kubenswrapper[4998]: I0227 10:38:08.738552 4998 generic.go:334] "Generic (PLEG): container finished" podID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerID="49b5f63b3c9365790e6f33a8ce9744ba4a33b0c79cf429c553eea414f2918134" exitCode=143 Feb 27 10:38:08 crc 
kubenswrapper[4998]: I0227 10:38:08.738593 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d87daa-c2cb-4bcf-b365-0333589800e4","Type":"ContainerDied","Data":"49b5f63b3c9365790e6f33a8ce9744ba4a33b0c79cf429c553eea414f2918134"} Feb 27 10:38:08 crc kubenswrapper[4998]: I0227 10:38:08.740633 4998 generic.go:334] "Generic (PLEG): container finished" podID="d6f746cd-8061-4166-98ab-6d7b8151deb1" containerID="f25d062ee75caa879239d82c88522c8a89ff642eb2088ca30aca323a958a5c6e" exitCode=0 Feb 27 10:38:08 crc kubenswrapper[4998]: I0227 10:38:08.740679 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkwcq" event={"ID":"d6f746cd-8061-4166-98ab-6d7b8151deb1","Type":"ContainerDied","Data":"f25d062ee75caa879239d82c88522c8a89ff642eb2088ca30aca323a958a5c6e"} Feb 27 10:38:08 crc kubenswrapper[4998]: I0227 10:38:08.796815 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98bf12bf-459c-458c-b028-e0e0b59a3a34" path="/var/lib/kubelet/pods/98bf12bf-459c-458c-b028-e0e0b59a3a34/volumes" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.535485 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5844564ccc-p674r"] Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.568066 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64c87cb5cd-prbxl"] Feb 27 10:38:09 crc kubenswrapper[4998]: E0227 10:38:09.568376 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c77da3a-fe67-4207-8d0e-8f8938c0902b" containerName="init" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.568390 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c77da3a-fe67-4207-8d0e-8f8938c0902b" containerName="init" Feb 27 10:38:09 crc kubenswrapper[4998]: E0227 10:38:09.568400 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525020c3-603a-4430-8f28-1743f62fb179" containerName="oc" Feb 27 
10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.568410 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="525020c3-603a-4430-8f28-1743f62fb179" containerName="oc" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.568597 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c77da3a-fe67-4207-8d0e-8f8938c0902b" containerName="init" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.568608 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="525020c3-603a-4430-8f28-1743f62fb179" containerName="oc" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.574607 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.584315 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.604620 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64c87cb5cd-prbxl"] Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.648325 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cf746bbbf-fvmpp"] Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.675920 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d7f558cb4-k5mxh"] Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.677289 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.684034 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d7f558cb4-k5mxh"] Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717311 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-combined-ca-bundle\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717374 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-config-data\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717403 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a87b91-16fb-436d-8c53-317b204acebc-logs\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717513 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w95x4\" (UniqueName: \"kubernetes.io/projected/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-kube-api-access-w95x4\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717551 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-combined-ca-bundle\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717595 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjkf4\" (UniqueName: \"kubernetes.io/projected/63a87b91-16fb-436d-8c53-317b204acebc-kube-api-access-vjkf4\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717616 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-horizon-tls-certs\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717801 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-tls-certs\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717846 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-scripts\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717905 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-secret-key\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717927 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-scripts\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.717969 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-config-data\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.718010 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-horizon-secret-key\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.718041 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-logs\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819497 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-config-data\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819582 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-horizon-secret-key\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819633 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-logs\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819692 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-combined-ca-bundle\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819721 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-config-data\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819746 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a87b91-16fb-436d-8c53-317b204acebc-logs\") pod \"horizon-64c87cb5cd-prbxl\" (UID: 
\"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819772 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w95x4\" (UniqueName: \"kubernetes.io/projected/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-kube-api-access-w95x4\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819794 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-combined-ca-bundle\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819822 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjkf4\" (UniqueName: \"kubernetes.io/projected/63a87b91-16fb-436d-8c53-317b204acebc-kube-api-access-vjkf4\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819837 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-horizon-tls-certs\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819893 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-tls-certs\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " 
pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819957 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-scripts\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.819997 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-secret-key\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.820015 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-scripts\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.820763 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-scripts\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.820933 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-config-data\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.821028 4998 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-logs\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.821809 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a87b91-16fb-436d-8c53-317b204acebc-logs\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.822898 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-config-data\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.823097 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-scripts\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.827033 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-tls-certs\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.831408 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-secret-key\") pod \"horizon-64c87cb5cd-prbxl\" 
(UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.834790 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-combined-ca-bundle\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.835276 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-combined-ca-bundle\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.833381 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-horizon-tls-certs\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.847007 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-horizon-secret-key\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.847808 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjkf4\" (UniqueName: \"kubernetes.io/projected/63a87b91-16fb-436d-8c53-317b204acebc-kube-api-access-vjkf4\") pod \"horizon-64c87cb5cd-prbxl\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 
10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.847932 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w95x4\" (UniqueName: \"kubernetes.io/projected/c6f8dcd8-b50e-47b8-b54c-2aa103be577c-kube-api-access-w95x4\") pod \"horizon-5d7f558cb4-k5mxh\" (UID: \"c6f8dcd8-b50e-47b8-b54c-2aa103be577c\") " pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.899065 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:38:09 crc kubenswrapper[4998]: I0227 10:38:09.999442 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:38:11 crc kubenswrapper[4998]: I0227 10:38:11.534412 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" Feb 27 10:38:11 crc kubenswrapper[4998]: I0227 10:38:11.599084 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9bmcl"] Feb 27 10:38:11 crc kubenswrapper[4998]: I0227 10:38:11.599382 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-9bmcl" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="dnsmasq-dns" containerID="cri-o://ecebfe9c67e1c6d424fbecd3d09f6b8af170a6407819fbec9d1f23550f8b66ca" gracePeriod=10 Feb 27 10:38:11 crc kubenswrapper[4998]: I0227 10:38:11.810354 4998 generic.go:334] "Generic (PLEG): container finished" podID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerID="ecebfe9c67e1c6d424fbecd3d09f6b8af170a6407819fbec9d1f23550f8b66ca" exitCode=0 Feb 27 10:38:11 crc kubenswrapper[4998]: I0227 10:38:11.810544 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9bmcl" 
event={"ID":"b29cf5b5-0760-4c81-a1e5-e434017c2414","Type":"ContainerDied","Data":"ecebfe9c67e1c6d424fbecd3d09f6b8af170a6407819fbec9d1f23550f8b66ca"} Feb 27 10:38:15 crc kubenswrapper[4998]: I0227 10:38:15.274197 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9bmcl" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Feb 27 10:38:20 crc kubenswrapper[4998]: E0227 10:38:20.001670 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 27 10:38:20 crc kubenswrapper[4998]: E0227 10:38:20.002269 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n84h5dch77h676h5d5h5dch587h695hf9hc8h598h57ch664h558h55hfch5fbh695h8dhf8hdfh94h595h5c5h5d4h5b8h666h685hfdh5c9hf4h5d6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5844564ccc-p674r_openstack(3819243f-aca7-47b2-8ed5-ea24e21c8ca4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:38:20 crc kubenswrapper[4998]: E0227 
10:38:20.006663 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5844564ccc-p674r" podUID="3819243f-aca7-47b2-8ed5-ea24e21c8ca4" Feb 27 10:38:20 crc kubenswrapper[4998]: E0227 10:38:20.075036 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 27 10:38:20 crc kubenswrapper[4998]: E0227 10:38:20.075217 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b7h5fch667h67bh667h8dh57h68chf9h699h74h568h59fh659hdbh558h688h64bh55ch59bh654h669h75h9bh8ch74h65bh8chcfh594h5bbh57bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tddvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7cf746bbbf-fvmpp_openstack(9c739ae9-3515-4373-9dc4-80f1b866e4c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:38:20 crc kubenswrapper[4998]: E0227 
10:38:20.077870 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7cf746bbbf-fvmpp" podUID="9c739ae9-3515-4373-9dc4-80f1b866e4c8" Feb 27 10:38:20 crc kubenswrapper[4998]: I0227 10:38:20.274639 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9bmcl" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.207739 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.226912 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.246781 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276144 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-scripts\") pod \"d6f746cd-8061-4166-98ab-6d7b8151deb1\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276188 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c739ae9-3515-4373-9dc4-80f1b866e4c8-horizon-secret-key\") pod \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276217 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t44cx\" (UniqueName: \"kubernetes.io/projected/52d87daa-c2cb-4bcf-b365-0333589800e4-kube-api-access-t44cx\") pod \"52d87daa-c2cb-4bcf-b365-0333589800e4\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276344 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"52d87daa-c2cb-4bcf-b365-0333589800e4\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276366 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-config-data\") pod \"52d87daa-c2cb-4bcf-b365-0333589800e4\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276398 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-internal-tls-certs\") pod \"52d87daa-c2cb-4bcf-b365-0333589800e4\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276444 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c739ae9-3515-4373-9dc4-80f1b866e4c8-logs\") pod \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276467 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-credential-keys\") pod \"d6f746cd-8061-4166-98ab-6d7b8151deb1\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276494 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-fernet-keys\") pod \"d6f746cd-8061-4166-98ab-6d7b8151deb1\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276542 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-httpd-run\") pod \"52d87daa-c2cb-4bcf-b365-0333589800e4\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276581 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-scripts\") pod \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276613 4998 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tddvl\" (UniqueName: \"kubernetes.io/projected/9c739ae9-3515-4373-9dc4-80f1b866e4c8-kube-api-access-tddvl\") pod \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276646 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-config-data\") pod \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\" (UID: \"9c739ae9-3515-4373-9dc4-80f1b866e4c8\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276728 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-combined-ca-bundle\") pod \"52d87daa-c2cb-4bcf-b365-0333589800e4\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276751 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j679\" (UniqueName: \"kubernetes.io/projected/d6f746cd-8061-4166-98ab-6d7b8151deb1-kube-api-access-6j679\") pod \"d6f746cd-8061-4166-98ab-6d7b8151deb1\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276801 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-combined-ca-bundle\") pod \"d6f746cd-8061-4166-98ab-6d7b8151deb1\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276818 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-scripts\") pod \"52d87daa-c2cb-4bcf-b365-0333589800e4\" 
(UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276865 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-logs\") pod \"52d87daa-c2cb-4bcf-b365-0333589800e4\" (UID: \"52d87daa-c2cb-4bcf-b365-0333589800e4\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.276921 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-config-data\") pod \"d6f746cd-8061-4166-98ab-6d7b8151deb1\" (UID: \"d6f746cd-8061-4166-98ab-6d7b8151deb1\") " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.279026 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-scripts" (OuterVolumeSpecName: "scripts") pod "9c739ae9-3515-4373-9dc4-80f1b866e4c8" (UID: "9c739ae9-3515-4373-9dc4-80f1b866e4c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.279472 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "52d87daa-c2cb-4bcf-b365-0333589800e4" (UID: "52d87daa-c2cb-4bcf-b365-0333589800e4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.281903 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c739ae9-3515-4373-9dc4-80f1b866e4c8-logs" (OuterVolumeSpecName: "logs") pod "9c739ae9-3515-4373-9dc4-80f1b866e4c8" (UID: "9c739ae9-3515-4373-9dc4-80f1b866e4c8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.283326 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-config-data" (OuterVolumeSpecName: "config-data") pod "9c739ae9-3515-4373-9dc4-80f1b866e4c8" (UID: "9c739ae9-3515-4373-9dc4-80f1b866e4c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.283899 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-logs" (OuterVolumeSpecName: "logs") pod "52d87daa-c2cb-4bcf-b365-0333589800e4" (UID: "52d87daa-c2cb-4bcf-b365-0333589800e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.306959 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-scripts" (OuterVolumeSpecName: "scripts") pod "52d87daa-c2cb-4bcf-b365-0333589800e4" (UID: "52d87daa-c2cb-4bcf-b365-0333589800e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.306990 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "52d87daa-c2cb-4bcf-b365-0333589800e4" (UID: "52d87daa-c2cb-4bcf-b365-0333589800e4"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.307006 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d6f746cd-8061-4166-98ab-6d7b8151deb1" (UID: "d6f746cd-8061-4166-98ab-6d7b8151deb1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.307108 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c739ae9-3515-4373-9dc4-80f1b866e4c8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9c739ae9-3515-4373-9dc4-80f1b866e4c8" (UID: "9c739ae9-3515-4373-9dc4-80f1b866e4c8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.309670 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c739ae9-3515-4373-9dc4-80f1b866e4c8-kube-api-access-tddvl" (OuterVolumeSpecName: "kube-api-access-tddvl") pod "9c739ae9-3515-4373-9dc4-80f1b866e4c8" (UID: "9c739ae9-3515-4373-9dc4-80f1b866e4c8"). InnerVolumeSpecName "kube-api-access-tddvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.326997 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f746cd-8061-4166-98ab-6d7b8151deb1-kube-api-access-6j679" (OuterVolumeSpecName: "kube-api-access-6j679") pod "d6f746cd-8061-4166-98ab-6d7b8151deb1" (UID: "d6f746cd-8061-4166-98ab-6d7b8151deb1"). InnerVolumeSpecName "kube-api-access-6j679". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.327238 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d87daa-c2cb-4bcf-b365-0333589800e4-kube-api-access-t44cx" (OuterVolumeSpecName: "kube-api-access-t44cx") pod "52d87daa-c2cb-4bcf-b365-0333589800e4" (UID: "52d87daa-c2cb-4bcf-b365-0333589800e4"). InnerVolumeSpecName "kube-api-access-t44cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.345510 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-scripts" (OuterVolumeSpecName: "scripts") pod "d6f746cd-8061-4166-98ab-6d7b8151deb1" (UID: "d6f746cd-8061-4166-98ab-6d7b8151deb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.352992 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d6f746cd-8061-4166-98ab-6d7b8151deb1" (UID: "d6f746cd-8061-4166-98ab-6d7b8151deb1"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393472 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j679\" (UniqueName: \"kubernetes.io/projected/d6f746cd-8061-4166-98ab-6d7b8151deb1-kube-api-access-6j679\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393505 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393515 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393524 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393534 4998 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c739ae9-3515-4373-9dc4-80f1b866e4c8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393545 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t44cx\" (UniqueName: \"kubernetes.io/projected/52d87daa-c2cb-4bcf-b365-0333589800e4-kube-api-access-t44cx\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393573 4998 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393584 4998 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c739ae9-3515-4373-9dc4-80f1b866e4c8-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393597 4998 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393605 4998 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393615 4998 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d87daa-c2cb-4bcf-b365-0333589800e4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393628 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393643 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tddvl\" (UniqueName: \"kubernetes.io/projected/9c739ae9-3515-4373-9dc4-80f1b866e4c8-kube-api-access-tddvl\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.393659 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c739ae9-3515-4373-9dc4-80f1b866e4c8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.416560 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"52d87daa-c2cb-4bcf-b365-0333589800e4" (UID: "52d87daa-c2cb-4bcf-b365-0333589800e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.418849 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-config-data" (OuterVolumeSpecName: "config-data") pod "d6f746cd-8061-4166-98ab-6d7b8151deb1" (UID: "d6f746cd-8061-4166-98ab-6d7b8151deb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.431332 4998 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.447192 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "52d87daa-c2cb-4bcf-b365-0333589800e4" (UID: "52d87daa-c2cb-4bcf-b365-0333589800e4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.451510 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-config-data" (OuterVolumeSpecName: "config-data") pod "52d87daa-c2cb-4bcf-b365-0333589800e4" (UID: "52d87daa-c2cb-4bcf-b365-0333589800e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.464941 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6f746cd-8061-4166-98ab-6d7b8151deb1" (UID: "d6f746cd-8061-4166-98ab-6d7b8151deb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.495554 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.495583 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f746cd-8061-4166-98ab-6d7b8151deb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.495592 4998 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.495600 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.495609 4998 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.495617 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52d87daa-c2cb-4bcf-b365-0333589800e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.907875 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d87daa-c2cb-4bcf-b365-0333589800e4","Type":"ContainerDied","Data":"853ac6072ac5ead6add5ba0bd0bc5dac7a3862eab15c19b203498235c9c3ac84"} Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.908173 4998 scope.go:117] "RemoveContainer" containerID="ed4668b054966dd7317556579dc533a819bfcb8f8c15230c4dcd7a75075814e0" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.907928 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.910395 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkwcq" event={"ID":"d6f746cd-8061-4166-98ab-6d7b8151deb1","Type":"ContainerDied","Data":"67d8c8bcdc26f2b7777e8d423f7bf91875531ce8e98056825c557fb6bf15825d"} Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.910439 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d8c8bcdc26f2b7777e8d423f7bf91875531ce8e98056825c557fb6bf15825d" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.910460 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkwcq" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.912900 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cf746bbbf-fvmpp" event={"ID":"9c739ae9-3515-4373-9dc4-80f1b866e4c8","Type":"ContainerDied","Data":"acce384e04ac53334de54734447e439507a269b1ce001d7f6e03aca8a3d0218c"} Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.912942 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cf746bbbf-fvmpp" Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.941005 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:38:22 crc kubenswrapper[4998]: I0227 10:38:22.959159 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.015359 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:38:23 crc kubenswrapper[4998]: E0227 10:38:23.015797 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f746cd-8061-4166-98ab-6d7b8151deb1" containerName="keystone-bootstrap" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.015812 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f746cd-8061-4166-98ab-6d7b8151deb1" containerName="keystone-bootstrap" Feb 27 10:38:23 crc kubenswrapper[4998]: E0227 10:38:23.015823 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerName="glance-log" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.015828 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerName="glance-log" Feb 27 10:38:23 crc kubenswrapper[4998]: E0227 10:38:23.015841 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerName="glance-httpd" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.015847 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerName="glance-httpd" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.016023 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f746cd-8061-4166-98ab-6d7b8151deb1" containerName="keystone-bootstrap" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 
10:38:23.016038 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerName="glance-log" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.016045 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d87daa-c2cb-4bcf-b365-0333589800e4" containerName="glance-httpd" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.016992 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.019933 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.020039 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.029698 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cf746bbbf-fvmpp"] Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.048733 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7cf746bbbf-fvmpp"] Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.067399 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.105987 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.106057 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vclcz\" (UniqueName: 
\"kubernetes.io/projected/543a0e99-c247-4ab0-940e-461f495066cc-kube-api-access-vclcz\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.106080 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.106107 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.106139 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.106175 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-logs\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.106193 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.106240 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.207997 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.208073 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vclcz\" (UniqueName: \"kubernetes.io/projected/543a0e99-c247-4ab0-940e-461f495066cc-kube-api-access-vclcz\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.208102 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.208139 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.208380 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.208571 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.209111 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.209156 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-logs\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.209179 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.209261 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.209948 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-logs\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.220604 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.220954 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.221621 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.231145 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.233080 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vclcz\" (UniqueName: \"kubernetes.io/projected/543a0e99-c247-4ab0-940e-461f495066cc-kube-api-access-vclcz\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.255601 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.340877 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.345976 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mkwcq"] Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.354590 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mkwcq"] Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.433819 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dlccw"] Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.435129 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.439709 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4n75w" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.440139 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.440260 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.440382 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.441338 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.446335 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dlccw"] Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.514598 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-credential-keys\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.514710 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-combined-ca-bundle\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.514793 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-config-data\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.514929 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr955\" (UniqueName: \"kubernetes.io/projected/06aa0c33-2be2-426a-98a0-eff676933eb1-kube-api-access-tr955\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.514992 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-fernet-keys\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.515017 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-scripts\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.616745 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-config-data\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.616875 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr955\" (UniqueName: 
\"kubernetes.io/projected/06aa0c33-2be2-426a-98a0-eff676933eb1-kube-api-access-tr955\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.616941 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-scripts\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.616973 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-fernet-keys\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.617041 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-credential-keys\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.617085 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-combined-ca-bundle\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.620867 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-config-data\") pod \"keystone-bootstrap-dlccw\" (UID: 
\"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.621479 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-scripts\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.629829 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-credential-keys\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.630124 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-combined-ca-bundle\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.630558 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-fernet-keys\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 10:38:23.634485 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr955\" (UniqueName: \"kubernetes.io/projected/06aa0c33-2be2-426a-98a0-eff676933eb1-kube-api-access-tr955\") pod \"keystone-bootstrap-dlccw\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") " pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:23 crc kubenswrapper[4998]: I0227 
10:38:23.760392 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dlccw" Feb 27 10:38:24 crc kubenswrapper[4998]: I0227 10:38:24.776949 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d87daa-c2cb-4bcf-b365-0333589800e4" path="/var/lib/kubelet/pods/52d87daa-c2cb-4bcf-b365-0333589800e4/volumes" Feb 27 10:38:24 crc kubenswrapper[4998]: I0227 10:38:24.778706 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c739ae9-3515-4373-9dc4-80f1b866e4c8" path="/var/lib/kubelet/pods/9c739ae9-3515-4373-9dc4-80f1b866e4c8/volumes" Feb 27 10:38:24 crc kubenswrapper[4998]: I0227 10:38:24.779122 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6f746cd-8061-4166-98ab-6d7b8151deb1" path="/var/lib/kubelet/pods/d6f746cd-8061-4166-98ab-6d7b8151deb1/volumes" Feb 27 10:38:24 crc kubenswrapper[4998]: I0227 10:38:24.934461 4998 generic.go:334] "Generic (PLEG): container finished" podID="97372f36-bf18-4a79-917b-cf9b6d0f92a2" containerID="de81a5f70de9a03abbd0d4f927a765587abce470f9ae52dc8086c284b44f0c93" exitCode=0 Feb 27 10:38:24 crc kubenswrapper[4998]: I0227 10:38:24.934515 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b8h8c" event={"ID":"97372f36-bf18-4a79-917b-cf9b6d0f92a2","Type":"ContainerDied","Data":"de81a5f70de9a03abbd0d4f927a765587abce470f9ae52dc8086c284b44f0c93"} Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.274578 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9bmcl" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.275315 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:38:30 crc kubenswrapper[4998]: E0227 10:38:30.660385 4998 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 27 10:38:30 crc kubenswrapper[4998]: E0227 10:38:30.660760 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64dh684h674h8bh5dch57fh5fh58dhbfh669h57dh544h5c4h588h5ddh6fh5c8h84h597hc7hb4h65h697h9h56ch558h697h645h65bh654h7dh687q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-947lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppAr
morProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-954574d65-vqjfh_openstack(e758d850-f266-4136-8ad2-9f7cf30bc777): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:38:30 crc kubenswrapper[4998]: E0227 10:38:30.671349 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-954574d65-vqjfh" podUID="e758d850-f266-4136-8ad2-9f7cf30bc777" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.683719 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.693617 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.852938 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x25bj\" (UniqueName: \"kubernetes.io/projected/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-kube-api-access-x25bj\") pod \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.853009 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-combined-ca-bundle\") pod \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.853208 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-logs\") pod \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.853273 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkszr\" (UniqueName: \"kubernetes.io/projected/97372f36-bf18-4a79-917b-cf9b6d0f92a2-kube-api-access-dkszr\") pod \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.853348 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-scripts\") pod \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.853367 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-horizon-secret-key\") pod \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.853409 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-config\") pod \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\" (UID: \"97372f36-bf18-4a79-917b-cf9b6d0f92a2\") " Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.853478 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-config-data\") pod \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\" (UID: \"3819243f-aca7-47b2-8ed5-ea24e21c8ca4\") " Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.853711 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-logs" (OuterVolumeSpecName: "logs") pod "3819243f-aca7-47b2-8ed5-ea24e21c8ca4" (UID: "3819243f-aca7-47b2-8ed5-ea24e21c8ca4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.854059 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-scripts" (OuterVolumeSpecName: "scripts") pod "3819243f-aca7-47b2-8ed5-ea24e21c8ca4" (UID: "3819243f-aca7-47b2-8ed5-ea24e21c8ca4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.854326 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.854346 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.854367 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-config-data" (OuterVolumeSpecName: "config-data") pod "3819243f-aca7-47b2-8ed5-ea24e21c8ca4" (UID: "3819243f-aca7-47b2-8ed5-ea24e21c8ca4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.862490 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97372f36-bf18-4a79-917b-cf9b6d0f92a2-kube-api-access-dkszr" (OuterVolumeSpecName: "kube-api-access-dkszr") pod "97372f36-bf18-4a79-917b-cf9b6d0f92a2" (UID: "97372f36-bf18-4a79-917b-cf9b6d0f92a2"). InnerVolumeSpecName "kube-api-access-dkszr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.862644 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-kube-api-access-x25bj" (OuterVolumeSpecName: "kube-api-access-x25bj") pod "3819243f-aca7-47b2-8ed5-ea24e21c8ca4" (UID: "3819243f-aca7-47b2-8ed5-ea24e21c8ca4"). InnerVolumeSpecName "kube-api-access-x25bj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.864046 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3819243f-aca7-47b2-8ed5-ea24e21c8ca4" (UID: "3819243f-aca7-47b2-8ed5-ea24e21c8ca4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.881728 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97372f36-bf18-4a79-917b-cf9b6d0f92a2" (UID: "97372f36-bf18-4a79-917b-cf9b6d0f92a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.886919 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-config" (OuterVolumeSpecName: "config") pod "97372f36-bf18-4a79-917b-cf9b6d0f92a2" (UID: "97372f36-bf18-4a79-917b-cf9b6d0f92a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.956141 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x25bj\" (UniqueName: \"kubernetes.io/projected/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-kube-api-access-x25bj\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.956180 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.956193 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkszr\" (UniqueName: \"kubernetes.io/projected/97372f36-bf18-4a79-917b-cf9b6d0f92a2-kube-api-access-dkszr\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.956207 4998 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.956220 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/97372f36-bf18-4a79-917b-cf9b6d0f92a2-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:30 crc kubenswrapper[4998]: I0227 10:38:30.956251 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3819243f-aca7-47b2-8ed5-ea24e21c8ca4-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.025110 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b8h8c" event={"ID":"97372f36-bf18-4a79-917b-cf9b6d0f92a2","Type":"ContainerDied","Data":"aceba0ec71dae70656679e65d811b53fd10f444d3c8610573b64b23e7a678d50"} Feb 27 10:38:31 crc 
kubenswrapper[4998]: I0227 10:38:31.025155 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aceba0ec71dae70656679e65d811b53fd10f444d3c8610573b64b23e7a678d50" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.025240 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b8h8c" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.051254 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5844564ccc-p674r" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.051710 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5844564ccc-p674r" event={"ID":"3819243f-aca7-47b2-8ed5-ea24e21c8ca4","Type":"ContainerDied","Data":"cacee258059665f98edbb81905afc282817cf8df2f9bea109f38ecadf9d14331"} Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.156997 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5844564ccc-p674r"] Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.173110 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5844564ccc-p674r"] Feb 27 10:38:31 crc kubenswrapper[4998]: E0227 10:38:31.484319 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 27 10:38:31 crc kubenswrapper[4998]: E0227 10:38:31.484695 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmz29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-5x42c_openstack(d455fe16-80bf-42c1-be16-a87102249bf8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:38:31 crc kubenswrapper[4998]: E0227 10:38:31.487359 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-5x42c" 
podUID="d455fe16-80bf-42c1-be16-a87102249bf8" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.529283 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.538568 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.596785 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-combined-ca-bundle\") pod \"a6d3a1f2-9e57-4c10-9480-669366053f4b\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.596844 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-logs\") pod \"a6d3a1f2-9e57-4c10-9480-669366053f4b\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.596909 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-config-data\") pod \"a6d3a1f2-9e57-4c10-9480-669366053f4b\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.596998 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-sb\") pod \"b29cf5b5-0760-4c81-a1e5-e434017c2414\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.597027 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-config\") pod \"b29cf5b5-0760-4c81-a1e5-e434017c2414\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.597055 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v96q7\" (UniqueName: \"kubernetes.io/projected/b29cf5b5-0760-4c81-a1e5-e434017c2414-kube-api-access-v96q7\") pod \"b29cf5b5-0760-4c81-a1e5-e434017c2414\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.597081 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-dns-svc\") pod \"b29cf5b5-0760-4c81-a1e5-e434017c2414\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.597134 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-scripts\") pod \"a6d3a1f2-9e57-4c10-9480-669366053f4b\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.597158 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-nb\") pod \"b29cf5b5-0760-4c81-a1e5-e434017c2414\" (UID: \"b29cf5b5-0760-4c81-a1e5-e434017c2414\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.597178 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-public-tls-certs\") pod \"a6d3a1f2-9e57-4c10-9480-669366053f4b\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.597196 4998 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-httpd-run\") pod \"a6d3a1f2-9e57-4c10-9480-669366053f4b\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.597219 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a6d3a1f2-9e57-4c10-9480-669366053f4b\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.597294 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czx6r\" (UniqueName: \"kubernetes.io/projected/a6d3a1f2-9e57-4c10-9480-669366053f4b-kube-api-access-czx6r\") pod \"a6d3a1f2-9e57-4c10-9480-669366053f4b\" (UID: \"a6d3a1f2-9e57-4c10-9480-669366053f4b\") " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.611103 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-logs" (OuterVolumeSpecName: "logs") pod "a6d3a1f2-9e57-4c10-9480-669366053f4b" (UID: "a6d3a1f2-9e57-4c10-9480-669366053f4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.617848 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d3a1f2-9e57-4c10-9480-669366053f4b-kube-api-access-czx6r" (OuterVolumeSpecName: "kube-api-access-czx6r") pod "a6d3a1f2-9e57-4c10-9480-669366053f4b" (UID: "a6d3a1f2-9e57-4c10-9480-669366053f4b"). InnerVolumeSpecName "kube-api-access-czx6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.622443 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a6d3a1f2-9e57-4c10-9480-669366053f4b" (UID: "a6d3a1f2-9e57-4c10-9480-669366053f4b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.622674 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29cf5b5-0760-4c81-a1e5-e434017c2414-kube-api-access-v96q7" (OuterVolumeSpecName: "kube-api-access-v96q7") pod "b29cf5b5-0760-4c81-a1e5-e434017c2414" (UID: "b29cf5b5-0760-4c81-a1e5-e434017c2414"). InnerVolumeSpecName "kube-api-access-v96q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.625486 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-scripts" (OuterVolumeSpecName: "scripts") pod "a6d3a1f2-9e57-4c10-9480-669366053f4b" (UID: "a6d3a1f2-9e57-4c10-9480-669366053f4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.657240 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "a6d3a1f2-9e57-4c10-9480-669366053f4b" (UID: "a6d3a1f2-9e57-4c10-9480-669366053f4b"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.663454 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6d3a1f2-9e57-4c10-9480-669366053f4b" (UID: "a6d3a1f2-9e57-4c10-9480-669366053f4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.667850 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-config-data" (OuterVolumeSpecName: "config-data") pod "a6d3a1f2-9e57-4c10-9480-669366053f4b" (UID: "a6d3a1f2-9e57-4c10-9480-669366053f4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.683434 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b29cf5b5-0760-4c81-a1e5-e434017c2414" (UID: "b29cf5b5-0760-4c81-a1e5-e434017c2414"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.684495 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-config" (OuterVolumeSpecName: "config") pod "b29cf5b5-0760-4c81-a1e5-e434017c2414" (UID: "b29cf5b5-0760-4c81-a1e5-e434017c2414"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.688934 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a6d3a1f2-9e57-4c10-9480-669366053f4b" (UID: "a6d3a1f2-9e57-4c10-9480-669366053f4b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.700526 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b29cf5b5-0760-4c81-a1e5-e434017c2414" (UID: "b29cf5b5-0760-4c81-a1e5-e434017c2414"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.701152 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b29cf5b5-0760-4c81-a1e5-e434017c2414" (UID: "b29cf5b5-0760-4c81-a1e5-e434017c2414"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702109 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702125 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702135 4998 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702143 4998 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702163 4998 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702172 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czx6r\" (UniqueName: \"kubernetes.io/projected/a6d3a1f2-9e57-4c10-9480-669366053f4b-kube-api-access-czx6r\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702181 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702189 4998 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d3a1f2-9e57-4c10-9480-669366053f4b-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702196 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d3a1f2-9e57-4c10-9480-669366053f4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702203 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702211 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702220 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v96q7\" (UniqueName: \"kubernetes.io/projected/b29cf5b5-0760-4c81-a1e5-e434017c2414-kube-api-access-v96q7\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.702248 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b29cf5b5-0760-4c81-a1e5-e434017c2414-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.749406 4998 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.804300 4998 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:31 crc 
kubenswrapper[4998]: I0227 10:38:31.943582 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6fzk5"] Feb 27 10:38:31 crc kubenswrapper[4998]: E0227 10:38:31.943935 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97372f36-bf18-4a79-917b-cf9b6d0f92a2" containerName="neutron-db-sync" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.943950 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="97372f36-bf18-4a79-917b-cf9b6d0f92a2" containerName="neutron-db-sync" Feb 27 10:38:31 crc kubenswrapper[4998]: E0227 10:38:31.943967 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerName="glance-log" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.943973 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerName="glance-log" Feb 27 10:38:31 crc kubenswrapper[4998]: E0227 10:38:31.943981 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerName="glance-httpd" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.943987 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerName="glance-httpd" Feb 27 10:38:31 crc kubenswrapper[4998]: E0227 10:38:31.944002 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="init" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.944008 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="init" Feb 27 10:38:31 crc kubenswrapper[4998]: E0227 10:38:31.944030 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="dnsmasq-dns" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.944036 4998 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="dnsmasq-dns" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.944170 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="97372f36-bf18-4a79-917b-cf9b6d0f92a2" containerName="neutron-db-sync" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.944186 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="dnsmasq-dns" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.944198 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerName="glance-log" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.944209 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d3a1f2-9e57-4c10-9480-669366053f4b" containerName="glance-httpd" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.945055 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.970448 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64c87cb5cd-prbxl"] Feb 27 10:38:31 crc kubenswrapper[4998]: I0227 10:38:31.991534 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6fzk5"] Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.010531 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftshk\" (UniqueName: \"kubernetes.io/projected/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-kube-api-access-ftshk\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.010953 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.011051 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-svc\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.011122 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.011212 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-config\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.011442 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.072668 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-698758b865-9bmcl" event={"ID":"b29cf5b5-0760-4c81-a1e5-e434017c2414","Type":"ContainerDied","Data":"b4503c84c0476327af577e4033f1ebdb16802044b42c485d0141ee11074d0b3b"} Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.072754 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9bmcl" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.078905 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.079069 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d3a1f2-9e57-4c10-9480-669366053f4b","Type":"ContainerDied","Data":"8238718ed2c65702164469bd57b659441f4fcccdd1360a18af49b3857ed9c977"} Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.079115 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-787dd6b8cd-n8j8x"] Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.080399 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: E0227 10:38:32.081248 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-5x42c" podUID="d455fe16-80bf-42c1-be16-a87102249bf8" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.087340 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.087545 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.087723 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vp4t7" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.087827 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.111708 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-787dd6b8cd-n8j8x"] Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113095 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113158 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-httpd-config\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: 
\"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113253 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-config\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113286 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-ovndb-tls-certs\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113326 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-combined-ca-bundle\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113354 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwqv\" (UniqueName: \"kubernetes.io/projected/b910b535-da07-4b60-b42c-72f170ac8bbc-kube-api-access-jpwqv\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113395 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftshk\" (UniqueName: \"kubernetes.io/projected/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-kube-api-access-ftshk\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: 
\"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113514 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113592 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-svc\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113631 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.113719 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-config\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.114459 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " 
pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.114740 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-svc\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.115198 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.117478 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.121187 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-config\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.133354 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftshk\" (UniqueName: \"kubernetes.io/projected/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-kube-api-access-ftshk\") pod \"dnsmasq-dns-55f844cf75-6fzk5\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 
10:38:32.162506 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9bmcl"] Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.170111 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9bmcl"] Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.189402 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.202984 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.213107 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.218732 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-combined-ca-bundle\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.218783 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpwqv\" (UniqueName: \"kubernetes.io/projected/b910b535-da07-4b60-b42c-72f170ac8bbc-kube-api-access-jpwqv\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.218889 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-httpd-config\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.218941 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-config\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.218970 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-ovndb-tls-certs\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.225857 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-ovndb-tls-certs\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.229814 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-combined-ca-bundle\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.234542 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.237496 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.243360 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-config\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.245674 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.247220 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-httpd-config\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.252369 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpwqv\" (UniqueName: \"kubernetes.io/projected/b910b535-da07-4b60-b42c-72f170ac8bbc-kube-api-access-jpwqv\") pod \"neutron-787dd6b8cd-n8j8x\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") " pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.256415 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.272035 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.320451 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.320848 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.320908 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.320955 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpkpz\" (UniqueName: \"kubernetes.io/projected/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-kube-api-access-bpkpz\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.320985 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.321026 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.321112 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.321149 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-logs\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.413169 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423307 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423378 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpkpz\" (UniqueName: \"kubernetes.io/projected/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-kube-api-access-bpkpz\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423412 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423456 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423537 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " 
pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423577 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-logs\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423647 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423677 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423754 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.423996 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 
10:38:32.424243 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-logs\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.427661 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.428450 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.429085 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.432357 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.439064 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpkpz\" 
(UniqueName: \"kubernetes.io/projected/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-kube-api-access-bpkpz\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.446053 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") " pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.581947 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.778441 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3819243f-aca7-47b2-8ed5-ea24e21c8ca4" path="/var/lib/kubelet/pods/3819243f-aca7-47b2-8ed5-ea24e21c8ca4/volumes" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.779166 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d3a1f2-9e57-4c10-9480-669366053f4b" path="/var/lib/kubelet/pods/a6d3a1f2-9e57-4c10-9480-669366053f4b/volumes" Feb 27 10:38:32 crc kubenswrapper[4998]: I0227 10:38:32.780008 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" path="/var/lib/kubelet/pods/b29cf5b5-0760-4c81-a1e5-e434017c2414/volumes" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.132950 4998 scope.go:117] "RemoveContainer" containerID="49b5f63b3c9365790e6f33a8ce9744ba4a33b0c79cf429c553eea414f2918134" Feb 27 10:38:33 crc kubenswrapper[4998]: W0227 10:38:33.179804 4998 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a87b91_16fb_436d_8c53_317b204acebc.slice/crio-e1a4ba2a3ba7af1679d743503219ceeb8d5a71a9e424c94828a1604bda046705 WatchSource:0}: Error finding container e1a4ba2a3ba7af1679d743503219ceeb8d5a71a9e424c94828a1604bda046705: Status 404 returned error can't find the container with id e1a4ba2a3ba7af1679d743503219ceeb8d5a71a9e424c94828a1604bda046705 Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.199097 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:38:33 crc kubenswrapper[4998]: E0227 10:38:33.224527 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 27 10:38:33 crc kubenswrapper[4998]: E0227 10:38:33.224694 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdvmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tf4n2_openstack(c96d5b82-3e0b-49a0-be3d-7f2fae6dd592): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:38:33 crc kubenswrapper[4998]: E0227 10:38:33.226576 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tf4n2" podUID="c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.362036 4998 scope.go:117] "RemoveContainer" containerID="ecebfe9c67e1c6d424fbecd3d09f6b8af170a6407819fbec9d1f23550f8b66ca" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.418077 4998 scope.go:117] "RemoveContainer" containerID="5e38589cb6adadcd54fdd9743121ad7d62ad72c457498e1bb0e1b252a3ffd51a" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.421024 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.441815 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-config-data\") pod \"e758d850-f266-4136-8ad2-9f7cf30bc777\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.441875 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-947lq\" (UniqueName: \"kubernetes.io/projected/e758d850-f266-4136-8ad2-9f7cf30bc777-kube-api-access-947lq\") pod \"e758d850-f266-4136-8ad2-9f7cf30bc777\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.441929 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e758d850-f266-4136-8ad2-9f7cf30bc777-horizon-secret-key\") pod \"e758d850-f266-4136-8ad2-9f7cf30bc777\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.442020 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-scripts\") pod \"e758d850-f266-4136-8ad2-9f7cf30bc777\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.442145 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e758d850-f266-4136-8ad2-9f7cf30bc777-logs\") pod \"e758d850-f266-4136-8ad2-9f7cf30bc777\" (UID: \"e758d850-f266-4136-8ad2-9f7cf30bc777\") " Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.442865 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e758d850-f266-4136-8ad2-9f7cf30bc777-logs" (OuterVolumeSpecName: "logs") pod "e758d850-f266-4136-8ad2-9f7cf30bc777" (UID: "e758d850-f266-4136-8ad2-9f7cf30bc777"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.444014 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-config-data" (OuterVolumeSpecName: "config-data") pod "e758d850-f266-4136-8ad2-9f7cf30bc777" (UID: "e758d850-f266-4136-8ad2-9f7cf30bc777"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.444471 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-scripts" (OuterVolumeSpecName: "scripts") pod "e758d850-f266-4136-8ad2-9f7cf30bc777" (UID: "e758d850-f266-4136-8ad2-9f7cf30bc777"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.457358 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e758d850-f266-4136-8ad2-9f7cf30bc777-kube-api-access-947lq" (OuterVolumeSpecName: "kube-api-access-947lq") pod "e758d850-f266-4136-8ad2-9f7cf30bc777" (UID: "e758d850-f266-4136-8ad2-9f7cf30bc777"). InnerVolumeSpecName "kube-api-access-947lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.465795 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e758d850-f266-4136-8ad2-9f7cf30bc777-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e758d850-f266-4136-8ad2-9f7cf30bc777" (UID: "e758d850-f266-4136-8ad2-9f7cf30bc777"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.532441 4998 scope.go:117] "RemoveContainer" containerID="3bd6ae72de7a2e630c6afbc45e6bf84e157d6024d722f2ac89b29182aa9b90fb" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.547751 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.547776 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e758d850-f266-4136-8ad2-9f7cf30bc777-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.547787 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e758d850-f266-4136-8ad2-9f7cf30bc777-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.547797 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-947lq\" (UniqueName: \"kubernetes.io/projected/e758d850-f266-4136-8ad2-9f7cf30bc777-kube-api-access-947lq\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.547806 4998 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e758d850-f266-4136-8ad2-9f7cf30bc777-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.786823 4998 scope.go:117] "RemoveContainer" containerID="8d8a0d2fef601f74d77d7101ab158b41f5db80a93c85d0402b61c7357155bd16" Feb 27 10:38:33 crc kubenswrapper[4998]: I0227 10:38:33.866495 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d7f558cb4-k5mxh"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.029618 4998 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.048016 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dlccw"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.054122 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.146389 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-65b865b5bf-kvsff"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.147903 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.151527 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.152366 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.164348 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"543a0e99-c247-4ab0-940e-461f495066cc","Type":"ContainerStarted","Data":"3c73cf7ebfbe48810ee6b8010592983563bb9338dcdc49a3d8262c1a8e6422fb"} Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.166652 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64c87cb5cd-prbxl" event={"ID":"63a87b91-16fb-436d-8c53-317b204acebc","Type":"ContainerStarted","Data":"e1a4ba2a3ba7af1679d743503219ceeb8d5a71a9e424c94828a1604bda046705"} Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.168567 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-combined-ca-bundle\") pod \"neutron-65b865b5bf-kvsff\" (UID: 
\"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.168603 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-ovndb-tls-certs\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.168683 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-httpd-config\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.168703 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-config\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.168757 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-public-tls-certs\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.168784 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-internal-tls-certs\") pod \"neutron-65b865b5bf-kvsff\" (UID: 
\"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.168801 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7hl2\" (UniqueName: \"kubernetes.io/projected/90e83aa1-ab9f-4409-9515-6df2c46796cc-kube-api-access-s7hl2\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.175691 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlccw" event={"ID":"06aa0c33-2be2-426a-98a0-eff676933eb1","Type":"ContainerStarted","Data":"d23c5fd9866e836a7e1dce08869332e328cded1062650cf673b369230767d70e"} Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.180551 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-plff2" event={"ID":"e55e4748-da26-4ed7-8bba-e7260a78ba19","Type":"ContainerStarted","Data":"d1b0cc283350dc86c38043c0b922f6d720ef3669dd2604dce714abe1361d664f"} Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.191879 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d7f558cb4-k5mxh" event={"ID":"c6f8dcd8-b50e-47b8-b54c-2aa103be577c","Type":"ContainerStarted","Data":"33d97022dd69f983b4281e95922f85ea297a329d65a35402486437072fbbaf25"} Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.193386 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6fzk5"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.211483 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-954574d65-vqjfh" event={"ID":"e758d850-f266-4136-8ad2-9f7cf30bc777","Type":"ContainerDied","Data":"7bcbd2176035981790f57daeb17b4522c4f287a411fa7f6d987bb03015d6c1cc"} Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.211582 4998 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/horizon-954574d65-vqjfh" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.217119 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65b865b5bf-kvsff"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.247549 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-plff2" podStartSLOduration=3.180091096 podStartE2EDuration="33.247524841s" podCreationTimestamp="2026-02-27 10:38:01 +0000 UTC" firstStartedPulling="2026-02-27 10:38:03.110223143 +0000 UTC m=+1235.108494111" lastFinishedPulling="2026-02-27 10:38:33.177656888 +0000 UTC m=+1265.175927856" observedRunningTime="2026-02-27 10:38:34.199387453 +0000 UTC m=+1266.197658421" watchObservedRunningTime="2026-02-27 10:38:34.247524841 +0000 UTC m=+1266.245795809" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.253421 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerStarted","Data":"9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e"} Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.270466 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-httpd-config\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.270845 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-config\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.271263 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-public-tls-certs\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.271338 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-internal-tls-certs\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.271370 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7hl2\" (UniqueName: \"kubernetes.io/projected/90e83aa1-ab9f-4409-9515-6df2c46796cc-kube-api-access-s7hl2\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.271437 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-combined-ca-bundle\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.271489 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-ovndb-tls-certs\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: E0227 10:38:34.278106 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tf4n2" podUID="c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.278970 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-ovndb-tls-certs\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.289533 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-787dd6b8cd-n8j8x"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.295812 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-httpd-config\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.298589 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-internal-tls-certs\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.308021 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7hl2\" (UniqueName: \"kubernetes.io/projected/90e83aa1-ab9f-4409-9515-6df2c46796cc-kube-api-access-s7hl2\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.329140 4998 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-combined-ca-bundle\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.334824 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-config\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.363835 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-public-tls-certs\") pod \"neutron-65b865b5bf-kvsff\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.380023 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-954574d65-vqjfh"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.404616 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-954574d65-vqjfh"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.446660 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.521993 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:34 crc kubenswrapper[4998]: E0227 10:38:34.543318 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode758d850_f266_4136_8ad2_9f7cf30bc777.slice/crio-7bcbd2176035981790f57daeb17b4522c4f287a411fa7f6d987bb03015d6c1cc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode758d850_f266_4136_8ad2_9f7cf30bc777.slice\": RecentStats: unable to find data in memory cache]" Feb 27 10:38:34 crc kubenswrapper[4998]: I0227 10:38:34.812201 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e758d850-f266-4136-8ad2-9f7cf30bc777" path="/var/lib/kubelet/pods/e758d850-f266-4136-8ad2-9f7cf30bc777/volumes" Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.275576 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9bmcl" podUID="b29cf5b5-0760-4c81-a1e5-e434017c2414" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.280579 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65b865b5bf-kvsff"] Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.301683 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlccw" event={"ID":"06aa0c33-2be2-426a-98a0-eff676933eb1","Type":"ContainerStarted","Data":"eae523fac7505ae968dd9b7f9bb1c82265f152db269da617070438ba4412f92a"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.312776 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d7f558cb4-k5mxh" event={"ID":"c6f8dcd8-b50e-47b8-b54c-2aa103be577c","Type":"ContainerStarted","Data":"89ad96f667ddb56d226ac109d6d32ffb6cb321b04ed41f41229a2bbe91f9dcb2"} Feb 27 10:38:35 
crc kubenswrapper[4998]: I0227 10:38:35.312822 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d7f558cb4-k5mxh" event={"ID":"c6f8dcd8-b50e-47b8-b54c-2aa103be577c","Type":"ContainerStarted","Data":"4153cd795a2994937944f16ca6dc3c47f82661de20f6d22db809cfa50a0a5f7e"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.315311 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7","Type":"ContainerStarted","Data":"b1c19f0eb1db9ed519fe0017259110823b863c4815b8746211c2800ed86a6897"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.319526 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787dd6b8cd-n8j8x" event={"ID":"b910b535-da07-4b60-b42c-72f170ac8bbc","Type":"ContainerStarted","Data":"3a7e682515de5896cb79338cae359e4e76067f0248aad6f138cd24ec86c343bd"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.319697 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787dd6b8cd-n8j8x" event={"ID":"b910b535-da07-4b60-b42c-72f170ac8bbc","Type":"ContainerStarted","Data":"eae121cd1e1b682313b6c28cdc2c4eae877c426cfa7a1614bda858ca83e1b95a"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.319711 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787dd6b8cd-n8j8x" event={"ID":"b910b535-da07-4b60-b42c-72f170ac8bbc","Type":"ContainerStarted","Data":"c96b7294f4e72df341d03c04ec20641c0e0c0d9460ad674a8210907476bcca16"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.320541 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.330353 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dlccw" podStartSLOduration=12.330336375 podStartE2EDuration="12.330336375s" podCreationTimestamp="2026-02-27 
10:38:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:35.327242787 +0000 UTC m=+1267.325513745" watchObservedRunningTime="2026-02-27 10:38:35.330336375 +0000 UTC m=+1267.328607343" Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.337799 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"543a0e99-c247-4ab0-940e-461f495066cc","Type":"ContainerStarted","Data":"6db1d88d19bd1eee8afd58b6a8044df83c964f1e0606a25d3f38e26e921e013d"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.355761 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64c87cb5cd-prbxl" event={"ID":"63a87b91-16fb-436d-8c53-317b204acebc","Type":"ContainerStarted","Data":"4d13db667b5d6248ab35065cb2dad51e3d395498b2e94ba8f06c2fdb4455a5a9"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.355815 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64c87cb5cd-prbxl" event={"ID":"63a87b91-16fb-436d-8c53-317b204acebc","Type":"ContainerStarted","Data":"0d4b5db9912175f7b371aef129e1a3835005cfc41d31b381885990ea6c8dca9e"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.355909 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-787dd6b8cd-n8j8x" podStartSLOduration=3.3558973659999998 podStartE2EDuration="3.355897366s" podCreationTimestamp="2026-02-27 10:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:35.346605782 +0000 UTC m=+1267.344876740" watchObservedRunningTime="2026-02-27 10:38:35.355897366 +0000 UTC m=+1267.354168334" Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.367250 4998 generic.go:334] "Generic (PLEG): container finished" podID="73b3d7ab-a5fe-4bc8-a113-d665de7a3773" 
containerID="7b8c3abb1720b489976607ab04f56862c647bf6c64245aa94721312299b41b7c" exitCode=0 Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.368027 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" event={"ID":"73b3d7ab-a5fe-4bc8-a113-d665de7a3773","Type":"ContainerDied","Data":"7b8c3abb1720b489976607ab04f56862c647bf6c64245aa94721312299b41b7c"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.368066 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" event={"ID":"73b3d7ab-a5fe-4bc8-a113-d665de7a3773","Type":"ContainerStarted","Data":"058df4b49fc0778bd235a482cdd9f81e0e1405e12b5f48fdf57e3658b430332c"} Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.382469 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5d7f558cb4-k5mxh" podStartSLOduration=26.382450148 podStartE2EDuration="26.382450148s" podCreationTimestamp="2026-02-27 10:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:35.367936288 +0000 UTC m=+1267.366207256" watchObservedRunningTime="2026-02-27 10:38:35.382450148 +0000 UTC m=+1267.380721116" Feb 27 10:38:35 crc kubenswrapper[4998]: I0227 10:38:35.403253 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-64c87cb5cd-prbxl" podStartSLOduration=25.803858177 podStartE2EDuration="26.403210228s" podCreationTimestamp="2026-02-27 10:38:09 +0000 UTC" firstStartedPulling="2026-02-27 10:38:33.1988413 +0000 UTC m=+1265.197112268" lastFinishedPulling="2026-02-27 10:38:33.798193351 +0000 UTC m=+1265.796464319" observedRunningTime="2026-02-27 10:38:35.39953354 +0000 UTC m=+1267.397804508" watchObservedRunningTime="2026-02-27 10:38:35.403210228 +0000 UTC m=+1267.401481196" Feb 27 10:38:35 crc kubenswrapper[4998]: W0227 10:38:35.763344 4998 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90e83aa1_ab9f_4409_9515_6df2c46796cc.slice/crio-f885f06738127817b82268344421af2326d32f341336406fe9306a6fd3b0a478 WatchSource:0}: Error finding container f885f06738127817b82268344421af2326d32f341336406fe9306a6fd3b0a478: Status 404 returned error can't find the container with id f885f06738127817b82268344421af2326d32f341336406fe9306a6fd3b0a478 Feb 27 10:38:36 crc kubenswrapper[4998]: I0227 10:38:36.391341 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" event={"ID":"73b3d7ab-a5fe-4bc8-a113-d665de7a3773","Type":"ContainerStarted","Data":"49e89a12c7099fce6728d4b69138d6d1cb485e4360dc8f988fabd11fdd316cfe"} Feb 27 10:38:36 crc kubenswrapper[4998]: I0227 10:38:36.392350 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:38:36 crc kubenswrapper[4998]: I0227 10:38:36.396205 4998 generic.go:334] "Generic (PLEG): container finished" podID="e55e4748-da26-4ed7-8bba-e7260a78ba19" containerID="d1b0cc283350dc86c38043c0b922f6d720ef3669dd2604dce714abe1361d664f" exitCode=0 Feb 27 10:38:36 crc kubenswrapper[4998]: I0227 10:38:36.396528 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-plff2" event={"ID":"e55e4748-da26-4ed7-8bba-e7260a78ba19","Type":"ContainerDied","Data":"d1b0cc283350dc86c38043c0b922f6d720ef3669dd2604dce714abe1361d664f"} Feb 27 10:38:36 crc kubenswrapper[4998]: I0227 10:38:36.422625 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" podStartSLOduration=5.422600999 podStartE2EDuration="5.422600999s" podCreationTimestamp="2026-02-27 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:36.409332288 +0000 UTC m=+1268.407603256" 
watchObservedRunningTime="2026-02-27 10:38:36.422600999 +0000 UTC m=+1268.420871967" Feb 27 10:38:36 crc kubenswrapper[4998]: I0227 10:38:36.423517 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7","Type":"ContainerStarted","Data":"7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf"} Feb 27 10:38:36 crc kubenswrapper[4998]: I0227 10:38:36.432008 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65b865b5bf-kvsff" event={"ID":"90e83aa1-ab9f-4409-9515-6df2c46796cc","Type":"ContainerStarted","Data":"f885f06738127817b82268344421af2326d32f341336406fe9306a6fd3b0a478"} Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.458442 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"543a0e99-c247-4ab0-940e-461f495066cc","Type":"ContainerStarted","Data":"c048982bf17cc79eda51d9c9a9497b219787811a94818d7e1916880eff6a7eea"} Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.463951 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65b865b5bf-kvsff" event={"ID":"90e83aa1-ab9f-4409-9515-6df2c46796cc","Type":"ContainerStarted","Data":"dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9"} Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.463993 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65b865b5bf-kvsff" event={"ID":"90e83aa1-ab9f-4409-9515-6df2c46796cc","Type":"ContainerStarted","Data":"e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba"} Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.465005 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.470820 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerStarted","Data":"89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702"} Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.480662 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7","Type":"ContainerStarted","Data":"3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0"} Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.480846 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.480830812 podStartE2EDuration="15.480830812s" podCreationTimestamp="2026-02-27 10:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:37.480514252 +0000 UTC m=+1269.478785210" watchObservedRunningTime="2026-02-27 10:38:37.480830812 +0000 UTC m=+1269.479101780" Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.518478 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-65b865b5bf-kvsff" podStartSLOduration=3.518455457 podStartE2EDuration="3.518455457s" podCreationTimestamp="2026-02-27 10:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:37.506701314 +0000 UTC m=+1269.504972282" watchObservedRunningTime="2026-02-27 10:38:37.518455457 +0000 UTC m=+1269.516726425" Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.538195 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.538172532 podStartE2EDuration="5.538172532s" podCreationTimestamp="2026-02-27 10:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:37.536565051 +0000 UTC m=+1269.534836019" watchObservedRunningTime="2026-02-27 10:38:37.538172532 +0000 UTC m=+1269.536443510" Feb 27 10:38:37 crc kubenswrapper[4998]: I0227 10:38:37.977404 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-plff2" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.050702 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-config-data\") pod \"e55e4748-da26-4ed7-8bba-e7260a78ba19\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.050858 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk6cf\" (UniqueName: \"kubernetes.io/projected/e55e4748-da26-4ed7-8bba-e7260a78ba19-kube-api-access-nk6cf\") pod \"e55e4748-da26-4ed7-8bba-e7260a78ba19\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.050904 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-scripts\") pod \"e55e4748-da26-4ed7-8bba-e7260a78ba19\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.050973 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-combined-ca-bundle\") pod \"e55e4748-da26-4ed7-8bba-e7260a78ba19\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.051061 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e55e4748-da26-4ed7-8bba-e7260a78ba19-logs\") pod \"e55e4748-da26-4ed7-8bba-e7260a78ba19\" (UID: \"e55e4748-da26-4ed7-8bba-e7260a78ba19\") " Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.051856 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e55e4748-da26-4ed7-8bba-e7260a78ba19-logs" (OuterVolumeSpecName: "logs") pod "e55e4748-da26-4ed7-8bba-e7260a78ba19" (UID: "e55e4748-da26-4ed7-8bba-e7260a78ba19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.057466 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55e4748-da26-4ed7-8bba-e7260a78ba19-kube-api-access-nk6cf" (OuterVolumeSpecName: "kube-api-access-nk6cf") pod "e55e4748-da26-4ed7-8bba-e7260a78ba19" (UID: "e55e4748-da26-4ed7-8bba-e7260a78ba19"). InnerVolumeSpecName "kube-api-access-nk6cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.066755 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-scripts" (OuterVolumeSpecName: "scripts") pod "e55e4748-da26-4ed7-8bba-e7260a78ba19" (UID: "e55e4748-da26-4ed7-8bba-e7260a78ba19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.081347 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e55e4748-da26-4ed7-8bba-e7260a78ba19" (UID: "e55e4748-da26-4ed7-8bba-e7260a78ba19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.084499 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-config-data" (OuterVolumeSpecName: "config-data") pod "e55e4748-da26-4ed7-8bba-e7260a78ba19" (UID: "e55e4748-da26-4ed7-8bba-e7260a78ba19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.153285 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.153316 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55e4748-da26-4ed7-8bba-e7260a78ba19-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.153326 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.153334 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk6cf\" (UniqueName: \"kubernetes.io/projected/e55e4748-da26-4ed7-8bba-e7260a78ba19-kube-api-access-nk6cf\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.153345 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55e4748-da26-4ed7-8bba-e7260a78ba19-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.494935 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-plff2" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.494859 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-plff2" event={"ID":"e55e4748-da26-4ed7-8bba-e7260a78ba19","Type":"ContainerDied","Data":"6e227790258b331609f52b1c9341ffbabdb338cfb175d53f3d4a928e2ad8b547"} Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.495024 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e227790258b331609f52b1c9341ffbabdb338cfb175d53f3d4a928e2ad8b547" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.584077 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85b475b45b-ggjbp"] Feb 27 10:38:38 crc kubenswrapper[4998]: E0227 10:38:38.587357 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55e4748-da26-4ed7-8bba-e7260a78ba19" containerName="placement-db-sync" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.587387 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55e4748-da26-4ed7-8bba-e7260a78ba19" containerName="placement-db-sync" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.587704 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55e4748-da26-4ed7-8bba-e7260a78ba19" containerName="placement-db-sync" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.590739 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.597180 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.597407 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.597536 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.600139 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xlt66" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.600303 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.619345 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85b475b45b-ggjbp"] Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.664509 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4phj\" (UniqueName: \"kubernetes.io/projected/64d37fff-983b-4a39-89c4-dba36db2f1ba-kube-api-access-p4phj\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.664561 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-combined-ca-bundle\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.664593 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-internal-tls-certs\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.664621 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-config-data\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.664645 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-scripts\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.664706 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-public-tls-certs\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.664724 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d37fff-983b-4a39-89c4-dba36db2f1ba-logs\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.765845 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-internal-tls-certs\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.765907 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-config-data\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.765952 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-scripts\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.766037 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-public-tls-certs\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.766065 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d37fff-983b-4a39-89c4-dba36db2f1ba-logs\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.766123 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4phj\" (UniqueName: 
\"kubernetes.io/projected/64d37fff-983b-4a39-89c4-dba36db2f1ba-kube-api-access-p4phj\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.766163 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-combined-ca-bundle\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.766594 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d37fff-983b-4a39-89c4-dba36db2f1ba-logs\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.772006 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-scripts\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.773845 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-public-tls-certs\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.774281 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-combined-ca-bundle\") pod \"placement-85b475b45b-ggjbp\" (UID: 
\"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.775572 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-config-data\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.794703 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-internal-tls-certs\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.798871 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4phj\" (UniqueName: \"kubernetes.io/projected/64d37fff-983b-4a39-89c4-dba36db2f1ba-kube-api-access-p4phj\") pod \"placement-85b475b45b-ggjbp\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:38:38 crc kubenswrapper[4998]: I0227 10:38:38.917753 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85b475b45b-ggjbp"
Feb 27 10:38:39 crc kubenswrapper[4998]: I0227 10:38:39.497518 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85b475b45b-ggjbp"]
Feb 27 10:38:39 crc kubenswrapper[4998]: I0227 10:38:39.900859 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64c87cb5cd-prbxl"
Feb 27 10:38:39 crc kubenswrapper[4998]: I0227 10:38:39.901393 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64c87cb5cd-prbxl"
Feb 27 10:38:39 crc kubenswrapper[4998]: I0227 10:38:39.950673 4998 scope.go:117] "RemoveContainer" containerID="f3ace2ccd449b5fe79ecb1654a7a36dd9ed12da3407d4638855e5fd4910f4752"
Feb 27 10:38:40 crc kubenswrapper[4998]: I0227 10:38:40.000427 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5d7f558cb4-k5mxh"
Feb 27 10:38:40 crc kubenswrapper[4998]: I0227 10:38:40.000481 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5d7f558cb4-k5mxh"
Feb 27 10:38:40 crc kubenswrapper[4998]: I0227 10:38:40.532414 4998 generic.go:334] "Generic (PLEG): container finished" podID="06aa0c33-2be2-426a-98a0-eff676933eb1" containerID="eae523fac7505ae968dd9b7f9bb1c82265f152db269da617070438ba4412f92a" exitCode=0
Feb 27 10:38:40 crc kubenswrapper[4998]: I0227 10:38:40.532517 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlccw" event={"ID":"06aa0c33-2be2-426a-98a0-eff676933eb1","Type":"ContainerDied","Data":"eae523fac7505ae968dd9b7f9bb1c82265f152db269da617070438ba4412f92a"}
Feb 27 10:38:40 crc kubenswrapper[4998]: I0227 10:38:40.533892 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85b475b45b-ggjbp" event={"ID":"64d37fff-983b-4a39-89c4-dba36db2f1ba","Type":"ContainerStarted","Data":"406a6f9bfef14b6832688f03dd8dd8484ea04fd03bc2fc905283e66b8ee54ede"}
Feb 27 10:38:42 crc kubenswrapper[4998]: I0227 10:38:42.276436 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5"
Feb 27 10:38:42 crc kubenswrapper[4998]: I0227 10:38:42.342847 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-59cmz"]
Feb 27 10:38:42 crc kubenswrapper[4998]: I0227 10:38:42.343434 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" podUID="6c82c2bb-efea-40ad-9915-c9c61d9f53cf" containerName="dnsmasq-dns" containerID="cri-o://78f2e3cb632e97aca39a37ee02726f2d018a2584bae0e246feb315f4c646eda3" gracePeriod=10
Feb 27 10:38:42 crc kubenswrapper[4998]: I0227 10:38:42.573117 4998 generic.go:334] "Generic (PLEG): container finished" podID="6c82c2bb-efea-40ad-9915-c9c61d9f53cf" containerID="78f2e3cb632e97aca39a37ee02726f2d018a2584bae0e246feb315f4c646eda3" exitCode=0
Feb 27 10:38:42 crc kubenswrapper[4998]: I0227 10:38:42.573162 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" event={"ID":"6c82c2bb-efea-40ad-9915-c9c61d9f53cf","Type":"ContainerDied","Data":"78f2e3cb632e97aca39a37ee02726f2d018a2584bae0e246feb315f4c646eda3"}
Feb 27 10:38:42 crc kubenswrapper[4998]: I0227 10:38:42.582711 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 27 10:38:42 crc kubenswrapper[4998]: I0227 10:38:42.582780 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 27 10:38:42 crc kubenswrapper[4998]: I0227 10:38:42.610396 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 27 10:38:42 crc kubenswrapper[4998]: I0227 10:38:42.631257 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 27 10:38:43 crc kubenswrapper[4998]: I0227 10:38:43.342257 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 27 10:38:43 crc kubenswrapper[4998]: I0227 10:38:43.342319 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 27 10:38:43 crc kubenswrapper[4998]: I0227 10:38:43.375328 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 27 10:38:43 crc kubenswrapper[4998]: I0227 10:38:43.390804 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 27 10:38:43 crc kubenswrapper[4998]: I0227 10:38:43.586333 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 27 10:38:43 crc kubenswrapper[4998]: I0227 10:38:43.588591 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 27 10:38:43 crc kubenswrapper[4998]: I0227 10:38:43.588626 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 27 10:38:43 crc kubenswrapper[4998]: I0227 10:38:43.588640 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 27 10:38:44 crc kubenswrapper[4998]: I0227 10:38:44.984323 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dlccw"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.108526 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-credential-keys\") pod \"06aa0c33-2be2-426a-98a0-eff676933eb1\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.108587 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-combined-ca-bundle\") pod \"06aa0c33-2be2-426a-98a0-eff676933eb1\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.108644 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr955\" (UniqueName: \"kubernetes.io/projected/06aa0c33-2be2-426a-98a0-eff676933eb1-kube-api-access-tr955\") pod \"06aa0c33-2be2-426a-98a0-eff676933eb1\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.108679 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-fernet-keys\") pod \"06aa0c33-2be2-426a-98a0-eff676933eb1\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.108759 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-config-data\") pod \"06aa0c33-2be2-426a-98a0-eff676933eb1\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.108787 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-scripts\") pod \"06aa0c33-2be2-426a-98a0-eff676933eb1\" (UID: \"06aa0c33-2be2-426a-98a0-eff676933eb1\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.125304 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-scripts" (OuterVolumeSpecName: "scripts") pod "06aa0c33-2be2-426a-98a0-eff676933eb1" (UID: "06aa0c33-2be2-426a-98a0-eff676933eb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.133704 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06aa0c33-2be2-426a-98a0-eff676933eb1-kube-api-access-tr955" (OuterVolumeSpecName: "kube-api-access-tr955") pod "06aa0c33-2be2-426a-98a0-eff676933eb1" (UID: "06aa0c33-2be2-426a-98a0-eff676933eb1"). InnerVolumeSpecName "kube-api-access-tr955". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.141739 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "06aa0c33-2be2-426a-98a0-eff676933eb1" (UID: "06aa0c33-2be2-426a-98a0-eff676933eb1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.155654 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "06aa0c33-2be2-426a-98a0-eff676933eb1" (UID: "06aa0c33-2be2-426a-98a0-eff676933eb1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.210735 4998 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.211048 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.211061 4998 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.211078 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr955\" (UniqueName: \"kubernetes.io/projected/06aa0c33-2be2-426a-98a0-eff676933eb1-kube-api-access-tr955\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.211496 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06aa0c33-2be2-426a-98a0-eff676933eb1" (UID: "06aa0c33-2be2-426a-98a0-eff676933eb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.233851 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-config-data" (OuterVolumeSpecName: "config-data") pod "06aa0c33-2be2-426a-98a0-eff676933eb1" (UID: "06aa0c33-2be2-426a-98a0-eff676933eb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.234769 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.316873 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-svc\") pod \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.317184 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-nb\") pod \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.317408 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-config\") pod \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.317597 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-sb\") pod \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.317706 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd95s\" (UniqueName: \"kubernetes.io/projected/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-kube-api-access-nd95s\") pod \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.317806 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-swift-storage-0\") pod \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\" (UID: \"6c82c2bb-efea-40ad-9915-c9c61d9f53cf\") "
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.318618 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.318688 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aa0c33-2be2-426a-98a0-eff676933eb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.354751 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-kube-api-access-nd95s" (OuterVolumeSpecName: "kube-api-access-nd95s") pod "6c82c2bb-efea-40ad-9915-c9c61d9f53cf" (UID: "6c82c2bb-efea-40ad-9915-c9c61d9f53cf"). InnerVolumeSpecName "kube-api-access-nd95s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.377865 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-config" (OuterVolumeSpecName: "config") pod "6c82c2bb-efea-40ad-9915-c9c61d9f53cf" (UID: "6c82c2bb-efea-40ad-9915-c9c61d9f53cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.392837 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c82c2bb-efea-40ad-9915-c9c61d9f53cf" (UID: "6c82c2bb-efea-40ad-9915-c9c61d9f53cf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.397781 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c82c2bb-efea-40ad-9915-c9c61d9f53cf" (UID: "6c82c2bb-efea-40ad-9915-c9c61d9f53cf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.420577 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.420611 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd95s\" (UniqueName: \"kubernetes.io/projected/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-kube-api-access-nd95s\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.420624 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.420634 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.421300 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c82c2bb-efea-40ad-9915-c9c61d9f53cf" (UID: "6c82c2bb-efea-40ad-9915-c9c61d9f53cf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.433011 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c82c2bb-efea-40ad-9915-c9c61d9f53cf" (UID: "6c82c2bb-efea-40ad-9915-c9c61d9f53cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.522491 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.522717 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c82c2bb-efea-40ad-9915-c9c61d9f53cf-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.619416 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerStarted","Data":"6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362"}
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.621662 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz" event={"ID":"6c82c2bb-efea-40ad-9915-c9c61d9f53cf","Type":"ContainerDied","Data":"999c6b964a6edc639c6b4adb6b0df1b8b48568aa45e963f0bc0e6696fdfe26f9"}
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.621707 4998 scope.go:117] "RemoveContainer" containerID="78f2e3cb632e97aca39a37ee02726f2d018a2584bae0e246feb315f4c646eda3"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.621740 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-59cmz"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.624519 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlccw" event={"ID":"06aa0c33-2be2-426a-98a0-eff676933eb1","Type":"ContainerDied","Data":"d23c5fd9866e836a7e1dce08869332e328cded1062650cf673b369230767d70e"}
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.624614 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23c5fd9866e836a7e1dce08869332e328cded1062650cf673b369230767d70e"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.624563 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dlccw"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.635600 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.635630 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.637507 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85b475b45b-ggjbp" event={"ID":"64d37fff-983b-4a39-89c4-dba36db2f1ba","Type":"ContainerStarted","Data":"cbaf5c02f514eba186509dbedb6411360e62b35dd42d0510e6c64f49914d996d"}
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.637554 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85b475b45b-ggjbp" event={"ID":"64d37fff-983b-4a39-89c4-dba36db2f1ba","Type":"ContainerStarted","Data":"8da3b426469ff337c194003544e89203c78dc69859f657de6c5989d427a40e87"}
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.637599 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85b475b45b-ggjbp"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.637624 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85b475b45b-ggjbp"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.671098 4998 scope.go:117] "RemoveContainer" containerID="e06d77fca15ce2b688b643ad6b139954c8693ce9c53763f5b646060f0ef1cfe4"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.682152 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85b475b45b-ggjbp" podStartSLOduration=7.682127617 podStartE2EDuration="7.682127617s" podCreationTimestamp="2026-02-27 10:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:45.671710376 +0000 UTC m=+1277.669981354" watchObservedRunningTime="2026-02-27 10:38:45.682127617 +0000 UTC m=+1277.680398585"
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.724327 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-59cmz"]
Feb 27 10:38:45 crc kubenswrapper[4998]: I0227 10:38:45.733732 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-59cmz"]
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.152416 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.152604 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.194884 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7fb98b967f-nv7q9"]
Feb 27 10:38:46 crc kubenswrapper[4998]: E0227 10:38:46.195369 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c82c2bb-efea-40ad-9915-c9c61d9f53cf" containerName="init"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.195389 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c82c2bb-efea-40ad-9915-c9c61d9f53cf" containerName="init"
Feb 27 10:38:46 crc kubenswrapper[4998]: E0227 10:38:46.195401 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06aa0c33-2be2-426a-98a0-eff676933eb1" containerName="keystone-bootstrap"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.195409 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="06aa0c33-2be2-426a-98a0-eff676933eb1" containerName="keystone-bootstrap"
Feb 27 10:38:46 crc kubenswrapper[4998]: E0227 10:38:46.195425 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c82c2bb-efea-40ad-9915-c9c61d9f53cf" containerName="dnsmasq-dns"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.195433 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c82c2bb-efea-40ad-9915-c9c61d9f53cf" containerName="dnsmasq-dns"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.195626 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="06aa0c33-2be2-426a-98a0-eff676933eb1" containerName="keystone-bootstrap"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.195678 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c82c2bb-efea-40ad-9915-c9c61d9f53cf" containerName="dnsmasq-dns"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.196471 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.200720 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.200901 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.202644 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.203983 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.204142 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4n75w"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.219476 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fb98b967f-nv7q9"]
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.230971 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.323260 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.337610 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-public-tls-certs\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.337864 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-internal-tls-certs\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.337978 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-scripts\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.338170 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-config-data\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.338385 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-combined-ca-bundle\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.338482 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-credential-keys\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.338681 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-fernet-keys\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.338800 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5j8f\" (UniqueName: \"kubernetes.io/projected/ad724f01-459e-4616-b2a2-989f67e5f334-kube-api-access-k5j8f\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.441066 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-config-data\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.441660 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-combined-ca-bundle\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.441804 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-credential-keys\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.442185 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-fernet-keys\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.442365 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5j8f\" (UniqueName: \"kubernetes.io/projected/ad724f01-459e-4616-b2a2-989f67e5f334-kube-api-access-k5j8f\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.442526 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-public-tls-certs\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.442637 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-internal-tls-certs\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.442748 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-scripts\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.446272 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-combined-ca-bundle\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.447960 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-config-data\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.457984 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-scripts\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.458397 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-fernet-keys\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.461798 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-internal-tls-certs\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.461804 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-public-tls-certs\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.462147 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad724f01-459e-4616-b2a2-989f67e5f334-credential-keys\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.470874 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5j8f\" (UniqueName: \"kubernetes.io/projected/ad724f01-459e-4616-b2a2-989f67e5f334-kube-api-access-k5j8f\") pod \"keystone-7fb98b967f-nv7q9\" (UID: \"ad724f01-459e-4616-b2a2-989f67e5f334\") " pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.647456 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.680293 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.702914 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7fb98b967f-nv7q9"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.812989 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c82c2bb-efea-40ad-9915-c9c61d9f53cf" path="/var/lib/kubelet/pods/6c82c2bb-efea-40ad-9915-c9c61d9f53cf/volumes"
Feb 27 10:38:46 crc kubenswrapper[4998]: I0227 10:38:46.813794 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 27 10:38:47 crc kubenswrapper[4998]: I0227 10:38:47.570086 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fb98b967f-nv7q9"]
Feb 27 10:38:47 crc kubenswrapper[4998]: W0227 10:38:47.590336 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad724f01_459e_4616_b2a2_989f67e5f334.slice/crio-193cf2a7eaae6f5aca765e02a15b4879705e75d7b9aa0f6ef923ad48ccf838ee WatchSource:0}: Error finding container 193cf2a7eaae6f5aca765e02a15b4879705e75d7b9aa0f6ef923ad48ccf838ee: Status 404 returned error can't find the container with id 193cf2a7eaae6f5aca765e02a15b4879705e75d7b9aa0f6ef923ad48ccf838ee
Feb 27 10:38:47 crc kubenswrapper[4998]: I0227 10:38:47.687169 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fb98b967f-nv7q9" event={"ID":"ad724f01-459e-4616-b2a2-989f67e5f334","Type":"ContainerStarted","Data":"193cf2a7eaae6f5aca765e02a15b4879705e75d7b9aa0f6ef923ad48ccf838ee"}
Feb 27 10:38:47 crc kubenswrapper[4998]: I0227 10:38:47.693353 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5x42c" event={"ID":"d455fe16-80bf-42c1-be16-a87102249bf8","Type":"ContainerStarted","Data":"f0c82b8579f470c3c6cc44b4c3354d4c9bbff2e80f7f102e17391d38f2c19d98"}
Feb 27 10:38:47 crc kubenswrapper[4998]: I0227 10:38:47.697168 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tf4n2"
event={"ID":"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592","Type":"ContainerStarted","Data":"f4316920d4d3df2d0cebe2ec917145c808ee1bfd4546870b1316f784d8e64ea4"} Feb 27 10:38:47 crc kubenswrapper[4998]: I0227 10:38:47.751464 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-5x42c" podStartSLOduration=3.5254467419999997 podStartE2EDuration="47.751442338s" podCreationTimestamp="2026-02-27 10:38:00 +0000 UTC" firstStartedPulling="2026-02-27 10:38:03.149562522 +0000 UTC m=+1235.147833490" lastFinishedPulling="2026-02-27 10:38:47.375558118 +0000 UTC m=+1279.373829086" observedRunningTime="2026-02-27 10:38:47.714784715 +0000 UTC m=+1279.713055683" watchObservedRunningTime="2026-02-27 10:38:47.751442338 +0000 UTC m=+1279.749713296" Feb 27 10:38:47 crc kubenswrapper[4998]: I0227 10:38:47.759162 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tf4n2" podStartSLOduration=4.119597067 podStartE2EDuration="47.759140442s" podCreationTimestamp="2026-02-27 10:38:00 +0000 UTC" firstStartedPulling="2026-02-27 10:38:02.561606953 +0000 UTC m=+1234.559877921" lastFinishedPulling="2026-02-27 10:38:46.201150328 +0000 UTC m=+1278.199421296" observedRunningTime="2026-02-27 10:38:47.733435466 +0000 UTC m=+1279.731706444" watchObservedRunningTime="2026-02-27 10:38:47.759140442 +0000 UTC m=+1279.757411410" Feb 27 10:38:48 crc kubenswrapper[4998]: I0227 10:38:48.710352 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fb98b967f-nv7q9" event={"ID":"ad724f01-459e-4616-b2a2-989f67e5f334","Type":"ContainerStarted","Data":"a115cbc380416d91edfacfba49e7f1609dbef85233f0743f41f2ee78c364d659"} Feb 27 10:38:48 crc kubenswrapper[4998]: I0227 10:38:48.711823 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7fb98b967f-nv7q9" Feb 27 10:38:48 crc kubenswrapper[4998]: I0227 10:38:48.727772 4998 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/keystone-7fb98b967f-nv7q9" podStartSLOduration=2.7277129799999997 podStartE2EDuration="2.72771298s" podCreationTimestamp="2026-02-27 10:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:48.725309684 +0000 UTC m=+1280.723580672" watchObservedRunningTime="2026-02-27 10:38:48.72771298 +0000 UTC m=+1280.725983948" Feb 27 10:38:49 crc kubenswrapper[4998]: I0227 10:38:49.904801 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64c87cb5cd-prbxl" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Feb 27 10:38:50 crc kubenswrapper[4998]: I0227 10:38:50.005023 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5d7f558cb4-k5mxh" podUID="c6f8dcd8-b50e-47b8-b54c-2aa103be577c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Feb 27 10:38:51 crc kubenswrapper[4998]: I0227 10:38:51.736532 4998 generic.go:334] "Generic (PLEG): container finished" podID="d455fe16-80bf-42c1-be16-a87102249bf8" containerID="f0c82b8579f470c3c6cc44b4c3354d4c9bbff2e80f7f102e17391d38f2c19d98" exitCode=0 Feb 27 10:38:51 crc kubenswrapper[4998]: I0227 10:38:51.736620 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5x42c" event={"ID":"d455fe16-80bf-42c1-be16-a87102249bf8","Type":"ContainerDied","Data":"f0c82b8579f470c3c6cc44b4c3354d4c9bbff2e80f7f102e17391d38f2c19d98"} Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.108448 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.222625 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-combined-ca-bundle\") pod \"d455fe16-80bf-42c1-be16-a87102249bf8\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.222681 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmz29\" (UniqueName: \"kubernetes.io/projected/d455fe16-80bf-42c1-be16-a87102249bf8-kube-api-access-wmz29\") pod \"d455fe16-80bf-42c1-be16-a87102249bf8\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.223877 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-db-sync-config-data\") pod \"d455fe16-80bf-42c1-be16-a87102249bf8\" (UID: \"d455fe16-80bf-42c1-be16-a87102249bf8\") " Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.229483 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d455fe16-80bf-42c1-be16-a87102249bf8-kube-api-access-wmz29" (OuterVolumeSpecName: "kube-api-access-wmz29") pod "d455fe16-80bf-42c1-be16-a87102249bf8" (UID: "d455fe16-80bf-42c1-be16-a87102249bf8"). InnerVolumeSpecName "kube-api-access-wmz29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.230067 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d455fe16-80bf-42c1-be16-a87102249bf8" (UID: "d455fe16-80bf-42c1-be16-a87102249bf8"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.249677 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d455fe16-80bf-42c1-be16-a87102249bf8" (UID: "d455fe16-80bf-42c1-be16-a87102249bf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.326891 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.326940 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmz29\" (UniqueName: \"kubernetes.io/projected/d455fe16-80bf-42c1-be16-a87102249bf8-kube-api-access-wmz29\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.326959 4998 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d455fe16-80bf-42c1-be16-a87102249bf8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.768178 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5x42c" Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.778847 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5x42c" event={"ID":"d455fe16-80bf-42c1-be16-a87102249bf8","Type":"ContainerDied","Data":"e50cd2128f7e91c20076b24bc3e650b4a658d257e453555c893d70f3ae1f9226"} Feb 27 10:38:54 crc kubenswrapper[4998]: I0227 10:38:54.778889 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e50cd2128f7e91c20076b24bc3e650b4a658d257e453555c893d70f3ae1f9226" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.471749 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-775f48c444-nclg6"] Feb 27 10:38:55 crc kubenswrapper[4998]: E0227 10:38:55.472562 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d455fe16-80bf-42c1-be16-a87102249bf8" containerName="barbican-db-sync" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.472578 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d455fe16-80bf-42c1-be16-a87102249bf8" containerName="barbican-db-sync" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.472801 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="d455fe16-80bf-42c1-be16-a87102249bf8" containerName="barbican-db-sync" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.474006 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.479724 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.479973 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.480119 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-58gdq" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.484092 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5578686c6c-rhd22"] Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.485505 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.488268 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.499923 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-775f48c444-nclg6"] Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.554909 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6626932a-9b39-41b4-a857-0ef4489cc74c-logs\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.555319 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lt2x\" (UniqueName: 
\"kubernetes.io/projected/f355d01e-079d-48f0-abc8-b26c45650314-kube-api-access-2lt2x\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.555394 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6626932a-9b39-41b4-a857-0ef4489cc74c-config-data\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.555420 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6626932a-9b39-41b4-a857-0ef4489cc74c-config-data-custom\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.555527 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmwjn\" (UniqueName: \"kubernetes.io/projected/6626932a-9b39-41b4-a857-0ef4489cc74c-kube-api-access-wmwjn\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.555642 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f355d01e-079d-48f0-abc8-b26c45650314-config-data-custom\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc 
kubenswrapper[4998]: I0227 10:38:55.555694 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6626932a-9b39-41b4-a857-0ef4489cc74c-combined-ca-bundle\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.555734 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f355d01e-079d-48f0-abc8-b26c45650314-config-data\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.555757 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f355d01e-079d-48f0-abc8-b26c45650314-logs\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.555820 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f355d01e-079d-48f0-abc8-b26c45650314-combined-ca-bundle\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.565167 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5578686c6c-rhd22"] Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.579386 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pwqr7"] Feb 27 10:38:55 crc 
kubenswrapper[4998]: I0227 10:38:55.581158 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.612685 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pwqr7"] Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659419 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659484 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659511 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659538 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lt2x\" (UniqueName: \"kubernetes.io/projected/f355d01e-079d-48f0-abc8-b26c45650314-kube-api-access-2lt2x\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc 
kubenswrapper[4998]: I0227 10:38:55.659558 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dqwl\" (UniqueName: \"kubernetes.io/projected/37ef111c-f910-4337-a24c-4debf7a4425b-kube-api-access-9dqwl\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659603 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6626932a-9b39-41b4-a857-0ef4489cc74c-config-data\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659633 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6626932a-9b39-41b4-a857-0ef4489cc74c-config-data-custom\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659682 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmwjn\" (UniqueName: \"kubernetes.io/projected/6626932a-9b39-41b4-a857-0ef4489cc74c-kube-api-access-wmwjn\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659736 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f355d01e-079d-48f0-abc8-b26c45650314-config-data-custom\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: 
\"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659771 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6626932a-9b39-41b4-a857-0ef4489cc74c-combined-ca-bundle\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659798 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659825 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f355d01e-079d-48f0-abc8-b26c45650314-config-data\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659848 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f355d01e-079d-48f0-abc8-b26c45650314-logs\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659886 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f355d01e-079d-48f0-abc8-b26c45650314-combined-ca-bundle\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: 
\"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659935 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6626932a-9b39-41b4-a857-0ef4489cc74c-logs\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.659955 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-config\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.672640 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f355d01e-079d-48f0-abc8-b26c45650314-config-data-custom\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.672919 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f355d01e-079d-48f0-abc8-b26c45650314-logs\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.674595 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6626932a-9b39-41b4-a857-0ef4489cc74c-combined-ca-bundle\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: 
\"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.674989 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6626932a-9b39-41b4-a857-0ef4489cc74c-logs\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.682835 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f355d01e-079d-48f0-abc8-b26c45650314-combined-ca-bundle\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.691746 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f355d01e-079d-48f0-abc8-b26c45650314-config-data\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.692629 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f9f5dcf86-47qzc"] Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.693365 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmwjn\" (UniqueName: \"kubernetes.io/projected/6626932a-9b39-41b4-a857-0ef4489cc74c-kube-api-access-wmwjn\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6" Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.694345 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6626932a-9b39-41b4-a857-0ef4489cc74c-config-data\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.694499 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.696961 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6626932a-9b39-41b4-a857-0ef4489cc74c-config-data-custom\") pod \"barbican-keystone-listener-775f48c444-nclg6\" (UID: \"6626932a-9b39-41b4-a857-0ef4489cc74c\") " pod="openstack/barbican-keystone-listener-775f48c444-nclg6"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.701153 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lt2x\" (UniqueName: \"kubernetes.io/projected/f355d01e-079d-48f0-abc8-b26c45650314-kube-api-access-2lt2x\") pod \"barbican-worker-5578686c6c-rhd22\" (UID: \"f355d01e-079d-48f0-abc8-b26c45650314\") " pod="openstack/barbican-worker-5578686c6c-rhd22"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.703088 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.718431 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f9f5dcf86-47qzc"]
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761162 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761209 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761263 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-combined-ca-bundle\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761295 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-config\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761321 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761358 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77wbz\" (UniqueName: \"kubernetes.io/projected/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-kube-api-access-77wbz\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761383 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761411 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761442 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dqwl\" (UniqueName: \"kubernetes.io/projected/37ef111c-f910-4337-a24c-4debf7a4425b-kube-api-access-9dqwl\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761470 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data-custom\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.761497 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-logs\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.762531 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.763133 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-config\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.764119 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.764361 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.764914 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.780740 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dqwl\" (UniqueName: \"kubernetes.io/projected/37ef111c-f910-4337-a24c-4debf7a4425b-kube-api-access-9dqwl\") pod \"dnsmasq-dns-85ff748b95-pwqr7\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.801126 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-775f48c444-nclg6"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.802214 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerStarted","Data":"15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804"}
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.802430 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="ceilometer-central-agent" containerID="cri-o://9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e" gracePeriod=30
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.802793 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.802876 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="sg-core" containerID="cri-o://6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362" gracePeriod=30
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.803013 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="proxy-httpd" containerID="cri-o://15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804" gracePeriod=30
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.803106 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="ceilometer-notification-agent" containerID="cri-o://89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702" gracePeriod=30
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.809642 4998 generic.go:334] "Generic (PLEG): container finished" podID="c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" containerID="f4316920d4d3df2d0cebe2ec917145c808ee1bfd4546870b1316f784d8e64ea4" exitCode=0
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.809715 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tf4n2" event={"ID":"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592","Type":"ContainerDied","Data":"f4316920d4d3df2d0cebe2ec917145c808ee1bfd4546870b1316f784d8e64ea4"}
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.810154 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5578686c6c-rhd22"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.841724 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.327974841 podStartE2EDuration="54.841705929s" podCreationTimestamp="2026-02-27 10:38:01 +0000 UTC" firstStartedPulling="2026-02-27 10:38:03.148304142 +0000 UTC m=+1235.146575110" lastFinishedPulling="2026-02-27 10:38:54.66203522 +0000 UTC m=+1286.660306198" observedRunningTime="2026-02-27 10:38:55.838589599 +0000 UTC m=+1287.836860577" watchObservedRunningTime="2026-02-27 10:38:55.841705929 +0000 UTC m=+1287.839976897"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.863588 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data-custom\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.863655 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-logs\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.863745 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.863792 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-combined-ca-bundle\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.863860 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77wbz\" (UniqueName: \"kubernetes.io/projected/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-kube-api-access-77wbz\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.865123 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-logs\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.868114 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data-custom\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.869013 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-combined-ca-bundle\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.871132 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.881835 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77wbz\" (UniqueName: \"kubernetes.io/projected/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-kube-api-access-77wbz\") pod \"barbican-api-5f9f5dcf86-47qzc\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:55 crc kubenswrapper[4998]: I0227 10:38:55.903124 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.132286 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.353919 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5578686c6c-rhd22"]
Feb 27 10:38:56 crc kubenswrapper[4998]: W0227 10:38:56.360336 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf355d01e_079d_48f0_abc8_b26c45650314.slice/crio-d1782f831817a15bb3203073480dbeeaba1b59b720ee5fd9f54da96f0af1c5a2 WatchSource:0}: Error finding container d1782f831817a15bb3203073480dbeeaba1b59b720ee5fd9f54da96f0af1c5a2: Status 404 returned error can't find the container with id d1782f831817a15bb3203073480dbeeaba1b59b720ee5fd9f54da96f0af1c5a2
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.437068 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-775f48c444-nclg6"]
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.509126 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pwqr7"]
Feb 27 10:38:56 crc kubenswrapper[4998]: W0227 10:38:56.511406 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ef111c_f910_4337_a24c_4debf7a4425b.slice/crio-ed7e04fe946d15a5ebe8cf4c45b2e0c5ff152f0187f62c6c9d66ea34b5eeb077 WatchSource:0}: Error finding container ed7e04fe946d15a5ebe8cf4c45b2e0c5ff152f0187f62c6c9d66ea34b5eeb077: Status 404 returned error can't find the container with id ed7e04fe946d15a5ebe8cf4c45b2e0c5ff152f0187f62c6c9d66ea34b5eeb077
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.676336 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f9f5dcf86-47qzc"]
Feb 27 10:38:56 crc kubenswrapper[4998]: W0227 10:38:56.679161 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod424ef47f_d9ad_49f6_b3bf_d72859ab02c8.slice/crio-22c20520db6dc1f397a5e4d6a73f488786c4f02585c0b68550c79f915be73183 WatchSource:0}: Error finding container 22c20520db6dc1f397a5e4d6a73f488786c4f02585c0b68550c79f915be73183: Status 404 returned error can't find the container with id 22c20520db6dc1f397a5e4d6a73f488786c4f02585c0b68550c79f915be73183
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.823046 4998 generic.go:334] "Generic (PLEG): container finished" podID="37ef111c-f910-4337-a24c-4debf7a4425b" containerID="4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd" exitCode=0
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.823319 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" event={"ID":"37ef111c-f910-4337-a24c-4debf7a4425b","Type":"ContainerDied","Data":"4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd"}
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.823361 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" event={"ID":"37ef111c-f910-4337-a24c-4debf7a4425b","Type":"ContainerStarted","Data":"ed7e04fe946d15a5ebe8cf4c45b2e0c5ff152f0187f62c6c9d66ea34b5eeb077"}
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.828511 4998 generic.go:334] "Generic (PLEG): container finished" podID="63050361-4a13-4b25-8c1a-ff9fed854172" containerID="15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804" exitCode=0
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.828541 4998 generic.go:334] "Generic (PLEG): container finished" podID="63050361-4a13-4b25-8c1a-ff9fed854172" containerID="6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362" exitCode=2
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.828551 4998 generic.go:334] "Generic (PLEG): container finished" podID="63050361-4a13-4b25-8c1a-ff9fed854172" containerID="9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e" exitCode=0
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.828596 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerDied","Data":"15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804"}
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.828626 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerDied","Data":"6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362"}
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.828641 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerDied","Data":"9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e"}
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.833179 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-775f48c444-nclg6" event={"ID":"6626932a-9b39-41b4-a857-0ef4489cc74c","Type":"ContainerStarted","Data":"0a204c141fb608291e62806d099d9df290a6692b45150058a828796eda496600"}
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.844324 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5578686c6c-rhd22" event={"ID":"f355d01e-079d-48f0-abc8-b26c45650314","Type":"ContainerStarted","Data":"d1782f831817a15bb3203073480dbeeaba1b59b720ee5fd9f54da96f0af1c5a2"}
Feb 27 10:38:56 crc kubenswrapper[4998]: I0227 10:38:56.848058 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f9f5dcf86-47qzc" event={"ID":"424ef47f-d9ad-49f6-b3bf-d72859ab02c8","Type":"ContainerStarted","Data":"22c20520db6dc1f397a5e4d6a73f488786c4f02585c0b68550c79f915be73183"}
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.378372 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tf4n2"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.493716 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.509457 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-combined-ca-bundle\") pod \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.511759 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-scripts\") pod \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.512332 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdvmf\" (UniqueName: \"kubernetes.io/projected/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-kube-api-access-zdvmf\") pod \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.512369 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-etc-machine-id\") pod \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.512605 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" (UID: "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.513062 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-db-sync-config-data\") pod \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.513303 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-config-data\") pod \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\" (UID: \"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.514000 4998 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.520212 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-kube-api-access-zdvmf" (OuterVolumeSpecName: "kube-api-access-zdvmf") pod "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" (UID: "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592"). InnerVolumeSpecName "kube-api-access-zdvmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.537714 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" (UID: "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.538326 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-scripts" (OuterVolumeSpecName: "scripts") pod "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" (UID: "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.592059 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" (UID: "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.615582 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-sg-core-conf-yaml\") pod \"63050361-4a13-4b25-8c1a-ff9fed854172\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.615745 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-config-data\") pod \"63050361-4a13-4b25-8c1a-ff9fed854172\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.615823 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-combined-ca-bundle\") pod \"63050361-4a13-4b25-8c1a-ff9fed854172\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.615866 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pvtx\" (UniqueName: \"kubernetes.io/projected/63050361-4a13-4b25-8c1a-ff9fed854172-kube-api-access-6pvtx\") pod \"63050361-4a13-4b25-8c1a-ff9fed854172\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.615950 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-scripts\") pod \"63050361-4a13-4b25-8c1a-ff9fed854172\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.616019 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-run-httpd\") pod \"63050361-4a13-4b25-8c1a-ff9fed854172\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.616067 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-log-httpd\") pod \"63050361-4a13-4b25-8c1a-ff9fed854172\" (UID: \"63050361-4a13-4b25-8c1a-ff9fed854172\") "
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.617166 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.617213 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.617259 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdvmf\" (UniqueName: \"kubernetes.io/projected/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-kube-api-access-zdvmf\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.617272 4998 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.617403 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63050361-4a13-4b25-8c1a-ff9fed854172" (UID: "63050361-4a13-4b25-8c1a-ff9fed854172"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.617470 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63050361-4a13-4b25-8c1a-ff9fed854172" (UID: "63050361-4a13-4b25-8c1a-ff9fed854172"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.621453 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-scripts" (OuterVolumeSpecName: "scripts") pod "63050361-4a13-4b25-8c1a-ff9fed854172" (UID: "63050361-4a13-4b25-8c1a-ff9fed854172"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.623590 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63050361-4a13-4b25-8c1a-ff9fed854172-kube-api-access-6pvtx" (OuterVolumeSpecName: "kube-api-access-6pvtx") pod "63050361-4a13-4b25-8c1a-ff9fed854172" (UID: "63050361-4a13-4b25-8c1a-ff9fed854172"). InnerVolumeSpecName "kube-api-access-6pvtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.625436 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-config-data" (OuterVolumeSpecName: "config-data") pod "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" (UID: "c96d5b82-3e0b-49a0-be3d-7f2fae6dd592"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.649728 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63050361-4a13-4b25-8c1a-ff9fed854172" (UID: "63050361-4a13-4b25-8c1a-ff9fed854172"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.724260 4998 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.724301 4998 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.724317 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pvtx\" (UniqueName: \"kubernetes.io/projected/63050361-4a13-4b25-8c1a-ff9fed854172-kube-api-access-6pvtx\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.724330 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.724342 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.724352 4998 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63050361-4a13-4b25-8c1a-ff9fed854172-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.788624 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63050361-4a13-4b25-8c1a-ff9fed854172" (UID: "63050361-4a13-4b25-8c1a-ff9fed854172"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.796430 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-config-data" (OuterVolumeSpecName: "config-data") pod "63050361-4a13-4b25-8c1a-ff9fed854172" (UID: "63050361-4a13-4b25-8c1a-ff9fed854172"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.826296 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.826357 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63050361-4a13-4b25-8c1a-ff9fed854172-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.865934 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f9f5dcf86-47qzc" event={"ID":"424ef47f-d9ad-49f6-b3bf-d72859ab02c8","Type":"ContainerStarted","Data":"3a58f2334e863f49237f38e612faecdb92f93f8dd4b4f3e71207362e0b8e4ee9"}
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.865996 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f9f5dcf86-47qzc" event={"ID":"424ef47f-d9ad-49f6-b3bf-d72859ab02c8","Type":"ContainerStarted","Data":"c6fa1e017f2401e33d8bbbf0501bef296439ae903050928ff9e72691331bf252"}
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.866517 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.866563 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f9f5dcf86-47qzc"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.875550 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" event={"ID":"37ef111c-f910-4337-a24c-4debf7a4425b","Type":"ContainerStarted","Data":"e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180"}
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.876427 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.896855 4998 generic.go:334] "Generic (PLEG): container finished" podID="63050361-4a13-4b25-8c1a-ff9fed854172" containerID="89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702" exitCode=0
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.896923 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.897045 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerDied","Data":"89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702"}
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.897084 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63050361-4a13-4b25-8c1a-ff9fed854172","Type":"ContainerDied","Data":"84892f17463b620030b60d047789717dafaa1e5dbeea66da153ddf671f8677fa"}
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.897130 4998 scope.go:117] "RemoveContainer" containerID="15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.900690 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tf4n2" event={"ID":"c96d5b82-3e0b-49a0-be3d-7f2fae6dd592","Type":"ContainerDied","Data":"543a0ae511145be1c75bb0fe8ed3bcb18995ba599ac85b7d568ac91b207e77c8"}
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.900714 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="543a0ae511145be1c75bb0fe8ed3bcb18995ba599ac85b7d568ac91b207e77c8"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.900769 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tf4n2"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.924616 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f9f5dcf86-47qzc" podStartSLOduration=2.9245871599999997 podStartE2EDuration="2.92458716s" podCreationTimestamp="2026-02-27 10:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:57.889957091 +0000 UTC m=+1289.888228059" watchObservedRunningTime="2026-02-27 10:38:57.92458716 +0000 UTC m=+1289.922858128"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.956499 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" podStartSLOduration=2.956479932 podStartE2EDuration="2.956479932s" podCreationTimestamp="2026-02-27 10:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:38:57.914733218 +0000 UTC m=+1289.913004186" watchObservedRunningTime="2026-02-27 10:38:57.956479932 +0000 UTC m=+1289.954750900"
Feb 27 10:38:57 crc kubenswrapper[4998]: I0227 10:38:57.979921 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.002507 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 27
10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.017602 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:38:58 crc kubenswrapper[4998]: E0227 10:38:58.018209 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="proxy-httpd" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.018250 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="proxy-httpd" Feb 27 10:38:58 crc kubenswrapper[4998]: E0227 10:38:58.018274 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="ceilometer-notification-agent" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.018285 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="ceilometer-notification-agent" Feb 27 10:38:58 crc kubenswrapper[4998]: E0227 10:38:58.018334 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" containerName="cinder-db-sync" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.018342 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" containerName="cinder-db-sync" Feb 27 10:38:58 crc kubenswrapper[4998]: E0227 10:38:58.018361 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="sg-core" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.018368 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="sg-core" Feb 27 10:38:58 crc kubenswrapper[4998]: E0227 10:38:58.018386 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="ceilometer-central-agent" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 
10:38:58.018392 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="ceilometer-central-agent" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.018610 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="ceilometer-central-agent" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.018629 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="proxy-httpd" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.018642 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="sg-core" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.018653 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" containerName="cinder-db-sync" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.018671 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" containerName="ceilometer-notification-agent" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.025291 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.029049 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.031822 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.038994 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.138248 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8hw7\" (UniqueName: \"kubernetes.io/projected/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-kube-api-access-q8hw7\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.138939 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-config-data\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.139085 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.139193 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-run-httpd\") pod \"ceilometer-0\" (UID: 
\"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.139729 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.139826 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-scripts\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.139916 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-log-httpd\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.186880 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.189797 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.194762 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.194966 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-z9824" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.195068 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.195173 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.208677 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.243871 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-scripts\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.243933 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-log-httpd\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.243967 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8hw7\" (UniqueName: \"kubernetes.io/projected/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-kube-api-access-q8hw7\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 
10:38:58.244031 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbbbec2f-ab3a-413d-81d6-04a63d922de8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.244080 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-config-data\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.244103 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.244131 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwnxc\" (UniqueName: \"kubernetes.io/projected/fbbbec2f-ab3a-413d-81d6-04a63d922de8-kube-api-access-hwnxc\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.244157 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.244176 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.244192 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.244218 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.244258 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-run-httpd\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.244276 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.245967 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.246743 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-run-httpd\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.250570 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.267456 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-config-data\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.276944 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-scripts\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.285434 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8hw7\" (UniqueName: \"kubernetes.io/projected/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-kube-api-access-q8hw7\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.311111 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.332086 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pwqr7"] Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.346417 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbbbec2f-ab3a-413d-81d6-04a63d922de8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.346488 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.346517 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwnxc\" (UniqueName: \"kubernetes.io/projected/fbbbec2f-ab3a-413d-81d6-04a63d922de8-kube-api-access-hwnxc\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.346537 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.346557 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.346571 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.355977 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbbbec2f-ab3a-413d-81d6-04a63d922de8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.386668 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.393937 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwnxc\" (UniqueName: \"kubernetes.io/projected/fbbbec2f-ab3a-413d-81d6-04a63d922de8-kube-api-access-hwnxc\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.407374 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6qj8s"] Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.408838 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.409835 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.409960 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.410845 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.419710 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6qj8s"] Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.427259 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.454316 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: 
\"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.454387 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.454405 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.454446 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.454483 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48wq\" (UniqueName: \"kubernetes.io/projected/5d039984-b6c7-4498-b215-d46fdab92f47-kube-api-access-l48wq\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.454526 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-config\") pod 
\"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.520688 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.570418 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-config\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.570531 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.570590 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.570610 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.570651 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.570706 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48wq\" (UniqueName: \"kubernetes.io/projected/5d039984-b6c7-4498-b215-d46fdab92f47-kube-api-access-l48wq\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.572277 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.572787 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.572999 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.573401 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.573699 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-config\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.607289 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l48wq\" (UniqueName: \"kubernetes.io/projected/5d039984-b6c7-4498-b215-d46fdab92f47-kube-api-access-l48wq\") pod \"dnsmasq-dns-5c9776ccc5-6qj8s\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.717369 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.719550 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.721184 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.732766 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.773884 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data-custom\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.773976 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/525cac6b-3504-40ef-bdb8-6352411b7006-logs\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.774028 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-scripts\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.774102 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.774143 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/525cac6b-3504-40ef-bdb8-6352411b7006-etc-machine-id\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.774167 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.774195 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv27s\" (UniqueName: \"kubernetes.io/projected/525cac6b-3504-40ef-bdb8-6352411b7006-kube-api-access-mv27s\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.776613 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63050361-4a13-4b25-8c1a-ff9fed854172" path="/var/lib/kubelet/pods/63050361-4a13-4b25-8c1a-ff9fed854172/volumes" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.795845 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.875896 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/525cac6b-3504-40ef-bdb8-6352411b7006-etc-machine-id\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.875942 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.875967 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv27s\" (UniqueName: \"kubernetes.io/projected/525cac6b-3504-40ef-bdb8-6352411b7006-kube-api-access-mv27s\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.876003 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data-custom\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.876044 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/525cac6b-3504-40ef-bdb8-6352411b7006-logs\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.876081 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-scripts\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.876139 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.876423 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/525cac6b-3504-40ef-bdb8-6352411b7006-etc-machine-id\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.877106 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/525cac6b-3504-40ef-bdb8-6352411b7006-logs\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.879647 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-scripts\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.880078 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.880464 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data-custom\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.880650 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:58 crc kubenswrapper[4998]: I0227 10:38:58.892434 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv27s\" (UniqueName: \"kubernetes.io/projected/525cac6b-3504-40ef-bdb8-6352411b7006-kube-api-access-mv27s\") pod \"cinder-api-0\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " pod="openstack/cinder-api-0" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.038268 4998 scope.go:117] "RemoveContainer" containerID="6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.049636 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.177455 4998 scope.go:117] "RemoveContainer" containerID="89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.384930 4998 scope.go:117] "RemoveContainer" containerID="9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.452837 4998 scope.go:117] "RemoveContainer" containerID="15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804" Feb 27 10:38:59 crc kubenswrapper[4998]: E0227 10:38:59.464366 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804\": container with ID starting with 15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804 not found: ID does not exist" containerID="15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.464446 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804"} err="failed to get container status \"15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804\": rpc error: code = NotFound desc = could not find container \"15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804\": container with ID starting with 15efc61f1fbc074e15f7623f7011f97435ebee338cc4adabd2d4282217a23804 not found: ID does not exist" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.464511 4998 scope.go:117] "RemoveContainer" containerID="6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362" Feb 27 10:38:59 crc kubenswrapper[4998]: E0227 10:38:59.473633 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362\": container with ID starting with 6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362 not found: ID does not exist" containerID="6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.473681 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362"} err="failed to get container status \"6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362\": rpc error: code = NotFound desc = could not find container \"6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362\": container with ID starting with 6c2aa53c9a3d41b29ebae488fb1cfb0f9f9a2224bde207ee937110d3162d5362 not found: ID does not exist" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.473713 4998 scope.go:117] "RemoveContainer" containerID="89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702" Feb 27 10:38:59 crc kubenswrapper[4998]: E0227 10:38:59.474633 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702\": container with ID starting with 89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702 not found: ID does not exist" containerID="89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.474687 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702"} err="failed to get container status \"89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702\": rpc error: code = NotFound desc = could not find container \"89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702\": 
container with ID starting with 89a24b7b9a31d6a217c7ea68c7d8a8143d6a021d9a773eabae436a7601c63702 not found: ID does not exist" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.474711 4998 scope.go:117] "RemoveContainer" containerID="9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e" Feb 27 10:38:59 crc kubenswrapper[4998]: E0227 10:38:59.475999 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e\": container with ID starting with 9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e not found: ID does not exist" containerID="9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e" Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.476049 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e"} err="failed to get container status \"9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e\": rpc error: code = NotFound desc = could not find container \"9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e\": container with ID starting with 9e79a77a343f83bceab73f363285e5705ab1a0c7de7eeab802874c8ee2910d8e not found: ID does not exist" Feb 27 10:38:59 crc kubenswrapper[4998]: W0227 10:38:59.685966 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb318c0e6_bda6_4a0b_9f83_6a636ef90c08.slice/crio-f77153d5e603c7d8ae22b5384f13c3e40508553c5b93bdad28cb169d91166d84 WatchSource:0}: Error finding container f77153d5e603c7d8ae22b5384f13c3e40508553c5b93bdad28cb169d91166d84: Status 404 returned error can't find the container with id f77153d5e603c7d8ae22b5384f13c3e40508553c5b93bdad28cb169d91166d84 Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.699890 4998 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.777167 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:38:59 crc kubenswrapper[4998]: W0227 10:38:59.783443 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525cac6b_3504_40ef_bdb8_6352411b7006.slice/crio-e52b1dbd3b25d8ae56c6acf6aa55bef2910214bc3d16938ad9b320544622901f WatchSource:0}: Error finding container e52b1dbd3b25d8ae56c6acf6aa55bef2910214bc3d16938ad9b320544622901f: Status 404 returned error can't find the container with id e52b1dbd3b25d8ae56c6acf6aa55bef2910214bc3d16938ad9b320544622901f Feb 27 10:38:59 crc kubenswrapper[4998]: W0227 10:38:59.787144 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbbbec2f_ab3a_413d_81d6_04a63d922de8.slice/crio-75d663a7be2ffb744cf1960d3de563689971caab77da4d8cbd50c99858c14ba5 WatchSource:0}: Error finding container 75d663a7be2ffb744cf1960d3de563689971caab77da4d8cbd50c99858c14ba5: Status 404 returned error can't find the container with id 75d663a7be2ffb744cf1960d3de563689971caab77da4d8cbd50c99858c14ba5 Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.792594 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:38:59 crc kubenswrapper[4998]: W0227 10:38:59.793459 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d039984_b6c7_4498_b215_d46fdab92f47.slice/crio-44848b4aeedb25bec6fa56575b7e94e68a5dd159519f6b991c8ccfb4f206a72b WatchSource:0}: Error finding container 44848b4aeedb25bec6fa56575b7e94e68a5dd159519f6b991c8ccfb4f206a72b: Status 404 returned error can't find the container with id 44848b4aeedb25bec6fa56575b7e94e68a5dd159519f6b991c8ccfb4f206a72b Feb 27 10:38:59 crc 
kubenswrapper[4998]: I0227 10:38:59.809580 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6qj8s"] Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.932298 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" event={"ID":"5d039984-b6c7-4498-b215-d46fdab92f47","Type":"ContainerStarted","Data":"44848b4aeedb25bec6fa56575b7e94e68a5dd159519f6b991c8ccfb4f206a72b"} Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.937870 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerStarted","Data":"f77153d5e603c7d8ae22b5384f13c3e40508553c5b93bdad28cb169d91166d84"} Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.940948 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-775f48c444-nclg6" event={"ID":"6626932a-9b39-41b4-a857-0ef4489cc74c","Type":"ContainerStarted","Data":"5f7cf59e2702b0f49f7ab5e2f79d11c413f37300f45354105c6f538b1daa2124"} Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.943277 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5578686c6c-rhd22" event={"ID":"f355d01e-079d-48f0-abc8-b26c45650314","Type":"ContainerStarted","Data":"f729b312f107eacbecba64c37f6d499e2cb4e3d3474580aae84a2197930b5277"} Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.945006 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbbbec2f-ab3a-413d-81d6-04a63d922de8","Type":"ContainerStarted","Data":"75d663a7be2ffb744cf1960d3de563689971caab77da4d8cbd50c99858c14ba5"} Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.946772 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" podUID="37ef111c-f910-4337-a24c-4debf7a4425b" containerName="dnsmasq-dns" 
containerID="cri-o://e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180" gracePeriod=10 Feb 27 10:38:59 crc kubenswrapper[4998]: I0227 10:38:59.947050 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"525cac6b-3504-40ef-bdb8-6352411b7006","Type":"ContainerStarted","Data":"e52b1dbd3b25d8ae56c6acf6aa55bef2910214bc3d16938ad9b320544622901f"} Feb 27 10:39:00 crc kubenswrapper[4998]: I0227 10:39:00.000821 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5d7f558cb4-k5mxh" podUID="c6f8dcd8-b50e-47b8-b54c-2aa103be577c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Feb 27 10:39:00 crc kubenswrapper[4998]: I0227 10:39:00.952841 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.008264 4998 generic.go:334] "Generic (PLEG): container finished" podID="37ef111c-f910-4337-a24c-4debf7a4425b" containerID="e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180" exitCode=0 Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.008381 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.008404 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" event={"ID":"37ef111c-f910-4337-a24c-4debf7a4425b","Type":"ContainerDied","Data":"e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180"} Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.008429 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pwqr7" event={"ID":"37ef111c-f910-4337-a24c-4debf7a4425b","Type":"ContainerDied","Data":"ed7e04fe946d15a5ebe8cf4c45b2e0c5ff152f0187f62c6c9d66ea34b5eeb077"} Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.008453 4998 scope.go:117] "RemoveContainer" containerID="e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.012235 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"525cac6b-3504-40ef-bdb8-6352411b7006","Type":"ContainerStarted","Data":"8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483"} Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.013387 4998 generic.go:334] "Generic (PLEG): container finished" podID="5d039984-b6c7-4498-b215-d46fdab92f47" containerID="5f4ced49849e0649523d983e7c39c8a92ffaa3036d0d34953cbc2d2ed82949fc" exitCode=0 Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.013431 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" event={"ID":"5d039984-b6c7-4498-b215-d46fdab92f47","Type":"ContainerDied","Data":"5f4ced49849e0649523d983e7c39c8a92ffaa3036d0d34953cbc2d2ed82949fc"} Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.016426 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerStarted","Data":"b5b4115a4eba4bfd7cad294788f637c5a83d54134d82cbcafc6865077b76c827"} Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.031844 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-sb\") pod \"37ef111c-f910-4337-a24c-4debf7a4425b\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.031983 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-nb\") pod \"37ef111c-f910-4337-a24c-4debf7a4425b\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.032025 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dqwl\" (UniqueName: \"kubernetes.io/projected/37ef111c-f910-4337-a24c-4debf7a4425b-kube-api-access-9dqwl\") pod \"37ef111c-f910-4337-a24c-4debf7a4425b\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.032065 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-config\") pod \"37ef111c-f910-4337-a24c-4debf7a4425b\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.032091 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-swift-storage-0\") pod \"37ef111c-f910-4337-a24c-4debf7a4425b\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.032205 4998 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-svc\") pod \"37ef111c-f910-4337-a24c-4debf7a4425b\" (UID: \"37ef111c-f910-4337-a24c-4debf7a4425b\") " Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.042298 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-775f48c444-nclg6" event={"ID":"6626932a-9b39-41b4-a857-0ef4489cc74c","Type":"ContainerStarted","Data":"76b73039cafdd8df5b81f890322a40cdebbe3864dd2b19a5ccf87b373485866a"} Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.049657 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ef111c-f910-4337-a24c-4debf7a4425b-kube-api-access-9dqwl" (OuterVolumeSpecName: "kube-api-access-9dqwl") pod "37ef111c-f910-4337-a24c-4debf7a4425b" (UID: "37ef111c-f910-4337-a24c-4debf7a4425b"). InnerVolumeSpecName "kube-api-access-9dqwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.061804 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5578686c6c-rhd22" event={"ID":"f355d01e-079d-48f0-abc8-b26c45650314","Type":"ContainerStarted","Data":"e4e29a9a76379c2688c7ba48ae83351a5ab7dd9a6d1f8aa174becc195a4b1080"} Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.076040 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-775f48c444-nclg6" podStartSLOduration=3.342569565 podStartE2EDuration="6.075974252s" podCreationTimestamp="2026-02-27 10:38:55 +0000 UTC" firstStartedPulling="2026-02-27 10:38:56.446329296 +0000 UTC m=+1288.444600264" lastFinishedPulling="2026-02-27 10:38:59.179733983 +0000 UTC m=+1291.178004951" observedRunningTime="2026-02-27 10:39:01.066561703 +0000 UTC m=+1293.064832671" watchObservedRunningTime="2026-02-27 10:39:01.075974252 +0000 UTC m=+1293.074245220" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.108752 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5578686c6c-rhd22" podStartSLOduration=3.29418022 podStartE2EDuration="6.108729311s" podCreationTimestamp="2026-02-27 10:38:55 +0000 UTC" firstStartedPulling="2026-02-27 10:38:56.363366844 +0000 UTC m=+1288.361637812" lastFinishedPulling="2026-02-27 10:38:59.177915935 +0000 UTC m=+1291.176186903" observedRunningTime="2026-02-27 10:39:01.091907828 +0000 UTC m=+1293.090178796" watchObservedRunningTime="2026-02-27 10:39:01.108729311 +0000 UTC m=+1293.107000279" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.134427 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dqwl\" (UniqueName: \"kubernetes.io/projected/37ef111c-f910-4337-a24c-4debf7a4425b-kube-api-access-9dqwl\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.173848 4998 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37ef111c-f910-4337-a24c-4debf7a4425b" (UID: "37ef111c-f910-4337-a24c-4debf7a4425b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.200894 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37ef111c-f910-4337-a24c-4debf7a4425b" (UID: "37ef111c-f910-4337-a24c-4debf7a4425b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.203731 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37ef111c-f910-4337-a24c-4debf7a4425b" (UID: "37ef111c-f910-4337-a24c-4debf7a4425b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.224332 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "37ef111c-f910-4337-a24c-4debf7a4425b" (UID: "37ef111c-f910-4337-a24c-4debf7a4425b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.236623 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.236676 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.236689 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.236698 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.246596 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-config" (OuterVolumeSpecName: "config") pod "37ef111c-f910-4337-a24c-4debf7a4425b" (UID: "37ef111c-f910-4337-a24c-4debf7a4425b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.344484 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ef111c-f910-4337-a24c-4debf7a4425b-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.375046 4998 scope.go:117] "RemoveContainer" containerID="4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.399122 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pwqr7"] Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.407998 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pwqr7"] Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.446400 4998 scope.go:117] "RemoveContainer" containerID="e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180" Feb 27 10:39:01 crc kubenswrapper[4998]: E0227 10:39:01.449466 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180\": container with ID starting with e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180 not found: ID does not exist" containerID="e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.449514 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180"} err="failed to get container status \"e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180\": rpc error: code = NotFound desc = could not find container \"e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180\": container with ID starting with 
e990128229d7cda91e02b88aea94d0f0e3c7126c6fb26643ed78fc56ff639180 not found: ID does not exist" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.449538 4998 scope.go:117] "RemoveContainer" containerID="4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd" Feb 27 10:39:01 crc kubenswrapper[4998]: E0227 10:39:01.450771 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd\": container with ID starting with 4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd not found: ID does not exist" containerID="4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd" Feb 27 10:39:01 crc kubenswrapper[4998]: I0227 10:39:01.450803 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd"} err="failed to get container status \"4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd\": rpc error: code = NotFound desc = could not find container \"4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd\": container with ID starting with 4815a28ff01e1acd2d96580a7cfac8034d329e8bfb7aecae0d32d25ac312fcbd not found: ID does not exist" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.082915 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerStarted","Data":"7c2ed0082690f16a18ed325c460eaa42a7a8a3ab54d79065151b8f70f891d707"} Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.090904 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbbbec2f-ab3a-413d-81d6-04a63d922de8","Type":"ContainerStarted","Data":"4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad"} Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.104299 
4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"525cac6b-3504-40ef-bdb8-6352411b7006","Type":"ContainerStarted","Data":"40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03"} Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.104521 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.110041 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" event={"ID":"5d039984-b6c7-4498-b215-d46fdab92f47","Type":"ContainerStarted","Data":"6699df0a79ea327072aee2c5a5231e89a775e998435397badc0de7158d249bcf"} Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.111184 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.119849 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.131895 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.131877091 podStartE2EDuration="4.131877091s" podCreationTimestamp="2026-02-27 10:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:02.123586048 +0000 UTC m=+1294.121857046" watchObservedRunningTime="2026-02-27 10:39:02.131877091 +0000 UTC m=+1294.130148059" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.163566 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" podStartSLOduration=4.163548616 podStartE2EDuration="4.163548616s" podCreationTimestamp="2026-02-27 10:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 10:39:02.145009188 +0000 UTC m=+1294.143280156" watchObservedRunningTime="2026-02-27 10:39:02.163548616 +0000 UTC m=+1294.161819584" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.266685 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.426936 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-787dd6b8cd-n8j8x" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.679284 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65b865b5bf-kvsff"] Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.679769 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65b865b5bf-kvsff" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-api" containerID="cri-o://e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba" gracePeriod=30 Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.680302 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65b865b5bf-kvsff" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-httpd" containerID="cri-o://dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9" gracePeriod=30 Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.783603 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-65b865b5bf-kvsff" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": read tcp 10.217.0.2:34520->10.217.0.159:9696: read: connection reset by peer" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.789942 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ef111c-f910-4337-a24c-4debf7a4425b" 
path="/var/lib/kubelet/pods/37ef111c-f910-4337-a24c-4debf7a4425b/volumes" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.813022 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c687d6d7f-drrbg"] Feb 27 10:39:02 crc kubenswrapper[4998]: E0227 10:39:02.813442 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ef111c-f910-4337-a24c-4debf7a4425b" containerName="dnsmasq-dns" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.813460 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ef111c-f910-4337-a24c-4debf7a4425b" containerName="dnsmasq-dns" Feb 27 10:39:02 crc kubenswrapper[4998]: E0227 10:39:02.813491 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ef111c-f910-4337-a24c-4debf7a4425b" containerName="init" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.813499 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ef111c-f910-4337-a24c-4debf7a4425b" containerName="init" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.813750 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ef111c-f910-4337-a24c-4debf7a4425b" containerName="dnsmasq-dns" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.815163 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c687d6d7f-drrbg"] Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.815290 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.881586 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-public-tls-certs\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.881794 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-internal-tls-certs\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.881874 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stl69\" (UniqueName: \"kubernetes.io/projected/82249664-67b0-479a-b26e-4a756f1d8b35-kube-api-access-stl69\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.881969 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-ovndb-tls-certs\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.881998 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-config\") pod \"neutron-6c687d6d7f-drrbg\" (UID: 
\"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.882067 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-httpd-config\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.882146 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-combined-ca-bundle\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.889775 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8fcf676c4-nnmzc"] Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.891865 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.912787 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.913002 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.939256 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8fcf676c4-nnmzc"] Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.984649 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-internal-tls-certs\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.984719 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-config-data-custom\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.984762 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stl69\" (UniqueName: \"kubernetes.io/projected/82249664-67b0-479a-b26e-4a756f1d8b35-kube-api-access-stl69\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.984790 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-internal-tls-certs\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.984823 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-config-data\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.984917 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-ovndb-tls-certs\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.984946 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-config\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.984997 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbll\" (UniqueName: \"kubernetes.io/projected/5b6939aa-143d-43c5-9547-0bdebbebaf43-kube-api-access-tbbll\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.985026 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-httpd-config\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.985065 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-combined-ca-bundle\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.985129 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-public-tls-certs\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.985174 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-public-tls-certs\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.985206 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-combined-ca-bundle\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.985251 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5b6939aa-143d-43c5-9547-0bdebbebaf43-logs\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.990884 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-combined-ca-bundle\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.991714 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-httpd-config\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.992056 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-config\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.992951 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-public-tls-certs\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.993055 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-internal-tls-certs\") pod \"neutron-6c687d6d7f-drrbg\" (UID: 
\"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:02 crc kubenswrapper[4998]: I0227 10:39:02.997875 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82249664-67b0-479a-b26e-4a756f1d8b35-ovndb-tls-certs\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.027695 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stl69\" (UniqueName: \"kubernetes.io/projected/82249664-67b0-479a-b26e-4a756f1d8b35-kube-api-access-stl69\") pod \"neutron-6c687d6d7f-drrbg\" (UID: \"82249664-67b0-479a-b26e-4a756f1d8b35\") " pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.086378 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-config-data-custom\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.086420 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-internal-tls-certs\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.086442 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-config-data\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " 
pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.086518 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbll\" (UniqueName: \"kubernetes.io/projected/5b6939aa-143d-43c5-9547-0bdebbebaf43-kube-api-access-tbbll\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.086567 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-public-tls-certs\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.086596 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-combined-ca-bundle\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.086612 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b6939aa-143d-43c5-9547-0bdebbebaf43-logs\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.086982 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b6939aa-143d-43c5-9547-0bdebbebaf43-logs\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc 
kubenswrapper[4998]: I0227 10:39:03.091848 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-config-data-custom\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.094124 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-internal-tls-certs\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.097207 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-combined-ca-bundle\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.097525 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-config-data\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.097967 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6939aa-143d-43c5-9547-0bdebbebaf43-public-tls-certs\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.106469 4998 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tbbll\" (UniqueName: \"kubernetes.io/projected/5b6939aa-143d-43c5-9547-0bdebbebaf43-kube-api-access-tbbll\") pod \"barbican-api-8fcf676c4-nnmzc\" (UID: \"5b6939aa-143d-43c5-9547-0bdebbebaf43\") " pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.137371 4998 generic.go:334] "Generic (PLEG): container finished" podID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerID="dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9" exitCode=0 Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.137570 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65b865b5bf-kvsff" event={"ID":"90e83aa1-ab9f-4409-9515-6df2c46796cc","Type":"ContainerDied","Data":"dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9"} Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.141250 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.143680 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbbbec2f-ab3a-413d-81d6-04a63d922de8","Type":"ContainerStarted","Data":"1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db"} Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.149423 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerStarted","Data":"3b7b29a9a14470e97f5459925c46251ada2e7b75204c1d2ec909d6dd2782441a"} Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.180121 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.072447555 podStartE2EDuration="5.180098917s" podCreationTimestamp="2026-02-27 10:38:58 +0000 UTC" firstStartedPulling="2026-02-27 10:38:59.790646321 +0000 UTC 
m=+1291.788917289" lastFinishedPulling="2026-02-27 10:39:00.898297683 +0000 UTC m=+1292.896568651" observedRunningTime="2026-02-27 10:39:03.168109777 +0000 UTC m=+1295.166380755" watchObservedRunningTime="2026-02-27 10:39:03.180098917 +0000 UTC m=+1295.178369885" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.270699 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.399648 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f9f5dcf86-47qzc" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.522192 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.837538 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8fcf676c4-nnmzc"] Feb 27 10:39:03 crc kubenswrapper[4998]: W0227 10:39:03.848050 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b6939aa_143d_43c5_9547_0bdebbebaf43.slice/crio-4cd13a19846796cdf2a31743215d56b101f65ee409feaab0a013ef09b0f1ef7a WatchSource:0}: Error finding container 4cd13a19846796cdf2a31743215d56b101f65ee409feaab0a013ef09b0f1ef7a: Status 404 returned error can't find the container with id 4cd13a19846796cdf2a31743215d56b101f65ee409feaab0a013ef09b0f1ef7a Feb 27 10:39:03 crc kubenswrapper[4998]: W0227 10:39:03.863738 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82249664_67b0_479a_b26e_4a756f1d8b35.slice/crio-5a9a338a3df5287a7176aaf37517bd567ac306de1421bba7a8ad6de723cd547b WatchSource:0}: Error finding container 5a9a338a3df5287a7176aaf37517bd567ac306de1421bba7a8ad6de723cd547b: Status 404 returned error can't find the container with id 
5a9a338a3df5287a7176aaf37517bd567ac306de1421bba7a8ad6de723cd547b Feb 27 10:39:03 crc kubenswrapper[4998]: I0227 10:39:03.868763 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c687d6d7f-drrbg"] Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.218676 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8fcf676c4-nnmzc" event={"ID":"5b6939aa-143d-43c5-9547-0bdebbebaf43","Type":"ContainerStarted","Data":"bd231f449057af6db9b9762099e2da28f9041e970e678c47aa5e0471ba4f2f62"} Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.229139 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8fcf676c4-nnmzc" event={"ID":"5b6939aa-143d-43c5-9547-0bdebbebaf43","Type":"ContainerStarted","Data":"4cd13a19846796cdf2a31743215d56b101f65ee409feaab0a013ef09b0f1ef7a"} Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.229198 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c687d6d7f-drrbg" event={"ID":"82249664-67b0-479a-b26e-4a756f1d8b35","Type":"ContainerStarted","Data":"5a9a338a3df5287a7176aaf37517bd567ac306de1421bba7a8ad6de723cd547b"} Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.229580 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="525cac6b-3504-40ef-bdb8-6352411b7006" containerName="cinder-api-log" containerID="cri-o://8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483" gracePeriod=30 Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.230086 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="525cac6b-3504-40ef-bdb8-6352411b7006" containerName="cinder-api" containerID="cri-o://40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03" gracePeriod=30 Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.524355 4998 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/neutron-65b865b5bf-kvsff" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": dial tcp 10.217.0.159:9696: connect: connection refused" Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.819772 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.935790 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/525cac6b-3504-40ef-bdb8-6352411b7006-logs\") pod \"525cac6b-3504-40ef-bdb8-6352411b7006\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.936001 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data-custom\") pod \"525cac6b-3504-40ef-bdb8-6352411b7006\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.936122 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-combined-ca-bundle\") pod \"525cac6b-3504-40ef-bdb8-6352411b7006\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.936247 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data\") pod \"525cac6b-3504-40ef-bdb8-6352411b7006\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.936404 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv27s\" (UniqueName: 
\"kubernetes.io/projected/525cac6b-3504-40ef-bdb8-6352411b7006-kube-api-access-mv27s\") pod \"525cac6b-3504-40ef-bdb8-6352411b7006\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.936524 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-scripts\") pod \"525cac6b-3504-40ef-bdb8-6352411b7006\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.936609 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/525cac6b-3504-40ef-bdb8-6352411b7006-etc-machine-id\") pod \"525cac6b-3504-40ef-bdb8-6352411b7006\" (UID: \"525cac6b-3504-40ef-bdb8-6352411b7006\") " Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.937738 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/525cac6b-3504-40ef-bdb8-6352411b7006-logs" (OuterVolumeSpecName: "logs") pod "525cac6b-3504-40ef-bdb8-6352411b7006" (UID: "525cac6b-3504-40ef-bdb8-6352411b7006"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.944030 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/525cac6b-3504-40ef-bdb8-6352411b7006-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "525cac6b-3504-40ef-bdb8-6352411b7006" (UID: "525cac6b-3504-40ef-bdb8-6352411b7006"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.952381 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-scripts" (OuterVolumeSpecName: "scripts") pod "525cac6b-3504-40ef-bdb8-6352411b7006" (UID: "525cac6b-3504-40ef-bdb8-6352411b7006"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.952501 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "525cac6b-3504-40ef-bdb8-6352411b7006" (UID: "525cac6b-3504-40ef-bdb8-6352411b7006"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.952619 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525cac6b-3504-40ef-bdb8-6352411b7006-kube-api-access-mv27s" (OuterVolumeSpecName: "kube-api-access-mv27s") pod "525cac6b-3504-40ef-bdb8-6352411b7006" (UID: "525cac6b-3504-40ef-bdb8-6352411b7006"). InnerVolumeSpecName "kube-api-access-mv27s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:04 crc kubenswrapper[4998]: I0227 10:39:04.974350 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "525cac6b-3504-40ef-bdb8-6352411b7006" (UID: "525cac6b-3504-40ef-bdb8-6352411b7006"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.001338 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data" (OuterVolumeSpecName: "config-data") pod "525cac6b-3504-40ef-bdb8-6352411b7006" (UID: "525cac6b-3504-40ef-bdb8-6352411b7006"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.038610 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv27s\" (UniqueName: \"kubernetes.io/projected/525cac6b-3504-40ef-bdb8-6352411b7006-kube-api-access-mv27s\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.038654 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.038665 4998 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/525cac6b-3504-40ef-bdb8-6352411b7006-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.038674 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/525cac6b-3504-40ef-bdb8-6352411b7006-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.038683 4998 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.038691 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.038700 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525cac6b-3504-40ef-bdb8-6352411b7006-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.119300 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.234994 4998 generic.go:334] "Generic (PLEG): container finished" podID="525cac6b-3504-40ef-bdb8-6352411b7006" containerID="40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03" exitCode=0 Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.235023 4998 generic.go:334] "Generic (PLEG): container finished" podID="525cac6b-3504-40ef-bdb8-6352411b7006" containerID="8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483" exitCode=143 Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.235058 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"525cac6b-3504-40ef-bdb8-6352411b7006","Type":"ContainerDied","Data":"40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03"} Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.235087 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"525cac6b-3504-40ef-bdb8-6352411b7006","Type":"ContainerDied","Data":"8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483"} Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.235097 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"525cac6b-3504-40ef-bdb8-6352411b7006","Type":"ContainerDied","Data":"e52b1dbd3b25d8ae56c6acf6aa55bef2910214bc3d16938ad9b320544622901f"} Feb 27 10:39:05 crc kubenswrapper[4998]: 
I0227 10:39:05.235113 4998 scope.go:117] "RemoveContainer" containerID="40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.235257 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.242096 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerStarted","Data":"952add9e480d2f75a6dc35ed32f4f1d2c64eff0c39f1db92eb9e22b123d65e62"} Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.243769 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.258800 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c687d6d7f-drrbg" event={"ID":"82249664-67b0-479a-b26e-4a756f1d8b35","Type":"ContainerStarted","Data":"1fca5cca73a78ae1f78c25869205d0100d458e32e32b8fbf3c43898a51d1c5f5"} Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.258861 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c687d6d7f-drrbg" event={"ID":"82249664-67b0-479a-b26e-4a756f1d8b35","Type":"ContainerStarted","Data":"6a2a82fbca14dc2db83bc9c5722e3588688b2a505d8b466094f401ed524108da"} Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.259966 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c687d6d7f-drrbg" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.272321 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.930115963 podStartE2EDuration="8.272293525s" podCreationTimestamp="2026-02-27 10:38:57 +0000 UTC" firstStartedPulling="2026-02-27 10:38:59.688873761 +0000 UTC m=+1291.687144729" lastFinishedPulling="2026-02-27 10:39:04.031051323 +0000 UTC 
m=+1296.029322291" observedRunningTime="2026-02-27 10:39:05.259643193 +0000 UTC m=+1297.257914161" watchObservedRunningTime="2026-02-27 10:39:05.272293525 +0000 UTC m=+1297.270564513" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.276023 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8fcf676c4-nnmzc" event={"ID":"5b6939aa-143d-43c5-9547-0bdebbebaf43","Type":"ContainerStarted","Data":"c40d80027f4197c0e743fa77187932ce1cd6bac0685cdfbc27be54eb01633d60"} Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.276214 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.300777 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.308767 4998 scope.go:117] "RemoveContainer" containerID="8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.317832 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.361598 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.361735 4998 scope.go:117] "RemoveContainer" containerID="40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03" Feb 27 10:39:05 crc kubenswrapper[4998]: E0227 10:39:05.362036 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525cac6b-3504-40ef-bdb8-6352411b7006" containerName="cinder-api-log" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.362054 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="525cac6b-3504-40ef-bdb8-6352411b7006" containerName="cinder-api-log" Feb 27 10:39:05 crc kubenswrapper[4998]: E0227 10:39:05.362078 4998 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="525cac6b-3504-40ef-bdb8-6352411b7006" containerName="cinder-api" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.362086 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="525cac6b-3504-40ef-bdb8-6352411b7006" containerName="cinder-api" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.362284 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="525cac6b-3504-40ef-bdb8-6352411b7006" containerName="cinder-api-log" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.362305 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="525cac6b-3504-40ef-bdb8-6352411b7006" containerName="cinder-api" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.363254 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.366700 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 27 10:39:05 crc kubenswrapper[4998]: E0227 10:39:05.366707 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03\": container with ID starting with 40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03 not found: ID does not exist" containerID="40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.366754 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03"} err="failed to get container status \"40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03\": rpc error: code = NotFound desc = could not find container \"40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03\": container with ID starting with 
40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03 not found: ID does not exist" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.366787 4998 scope.go:117] "RemoveContainer" containerID="8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.367031 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.367186 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 27 10:39:05 crc kubenswrapper[4998]: E0227 10:39:05.367204 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483\": container with ID starting with 8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483 not found: ID does not exist" containerID="8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.367252 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483"} err="failed to get container status \"8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483\": rpc error: code = NotFound desc = could not find container \"8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483\": container with ID starting with 8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483 not found: ID does not exist" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.367273 4998 scope.go:117] "RemoveContainer" containerID="40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.367956 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03"} err="failed to get container status \"40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03\": rpc error: code = NotFound desc = could not find container \"40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03\": container with ID starting with 40d06c2b7f227b7e4232c5dfa35f5b741387241707cd07f5bb54ca68977b7f03 not found: ID does not exist" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.367984 4998 scope.go:117] "RemoveContainer" containerID="8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.368198 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483"} err="failed to get container status \"8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483\": rpc error: code = NotFound desc = could not find container \"8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483\": container with ID starting with 8125398b44ab64c6e2c918f34c7cac044a83b4dd49c1dd09be7013d9c8ddc483 not found: ID does not exist" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.393268 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.403107 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c687d6d7f-drrbg" podStartSLOduration=3.403087796 podStartE2EDuration="3.403087796s" podCreationTimestamp="2026-02-27 10:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:05.309103033 +0000 UTC m=+1297.307373991" watchObservedRunningTime="2026-02-27 10:39:05.403087796 +0000 UTC m=+1297.401358774" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 
10:39:05.411111 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8fcf676c4-nnmzc" podStartSLOduration=3.411093599 podStartE2EDuration="3.411093599s" podCreationTimestamp="2026-02-27 10:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:05.3527891 +0000 UTC m=+1297.351060068" watchObservedRunningTime="2026-02-27 10:39:05.411093599 +0000 UTC m=+1297.409364567" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.455299 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7d4f173-de8d-491e-b190-3b00b6da940a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.455346 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d4f173-de8d-491e-b190-3b00b6da940a-logs\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.455404 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-config-data-custom\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.455421 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " 
pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.455451 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-scripts\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.455474 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.455500 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.455543 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm4rm\" (UniqueName: \"kubernetes.io/projected/d7d4f173-de8d-491e-b190-3b00b6da940a-kube-api-access-zm4rm\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.455572 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-config-data\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: E0227 10:39:05.496128 4998 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525cac6b_3504_40ef_bdb8_6352411b7006.slice\": RecentStats: unable to find data in memory cache]" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.561419 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-config-data-custom\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.561467 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.561533 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-scripts\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.561581 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.561618 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.561694 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm4rm\" (UniqueName: \"kubernetes.io/projected/d7d4f173-de8d-491e-b190-3b00b6da940a-kube-api-access-zm4rm\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.561737 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-config-data\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.561800 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7d4f173-de8d-491e-b190-3b00b6da940a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.561828 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d4f173-de8d-491e-b190-3b00b6da940a-logs\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.562304 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d4f173-de8d-491e-b190-3b00b6da940a-logs\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.562551 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d7d4f173-de8d-491e-b190-3b00b6da940a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.566410 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.566617 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.567454 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-scripts\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.568196 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-config-data\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.578630 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-config-data-custom\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.581109 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d4f173-de8d-491e-b190-3b00b6da940a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.589457 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm4rm\" (UniqueName: \"kubernetes.io/projected/d7d4f173-de8d-491e-b190-3b00b6da940a-kube-api-access-zm4rm\") pod \"cinder-api-0\" (UID: \"d7d4f173-de8d-491e-b190-3b00b6da940a\") " pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.694981 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:39:05 crc kubenswrapper[4998]: I0227 10:39:05.847430 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f9f5dcf86-47qzc" Feb 27 10:39:06 crc kubenswrapper[4998]: I0227 10:39:06.270138 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:39:06 crc kubenswrapper[4998]: I0227 10:39:06.363675 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:06 crc kubenswrapper[4998]: I0227 10:39:06.778809 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="525cac6b-3504-40ef-bdb8-6352411b7006" path="/var/lib/kubelet/pods/525cac6b-3504-40ef-bdb8-6352411b7006/volumes" Feb 27 10:39:07 crc kubenswrapper[4998]: I0227 10:39:07.371394 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d7d4f173-de8d-491e-b190-3b00b6da940a","Type":"ContainerStarted","Data":"48cbc26ff026dc5e5207548352f90b7959729f232b99b8c6bb3a1b228b1dde49"} Feb 27 10:39:07 crc kubenswrapper[4998]: I0227 10:39:07.371716 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"d7d4f173-de8d-491e-b190-3b00b6da940a","Type":"ContainerStarted","Data":"5fb270d115003b1e542a037fe0650e71df92d6abe3b0f722d27a63f3055aad4f"} Feb 27 10:39:08 crc kubenswrapper[4998]: I0227 10:39:08.381623 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d7d4f173-de8d-491e-b190-3b00b6da940a","Type":"ContainerStarted","Data":"b3d5ab3b4e760277f48aea7a9c4c0ddbf6da7846d04f448e77cbff45fb302524"} Feb 27 10:39:08 crc kubenswrapper[4998]: I0227 10:39:08.382099 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 27 10:39:08 crc kubenswrapper[4998]: I0227 10:39:08.408715 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.4086947309999998 podStartE2EDuration="3.408694731s" podCreationTimestamp="2026-02-27 10:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:08.400339696 +0000 UTC m=+1300.398610684" watchObservedRunningTime="2026-02-27 10:39:08.408694731 +0000 UTC m=+1300.406965699" Feb 27 10:39:08 crc kubenswrapper[4998]: I0227 10:39:08.731370 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 27 10:39:08 crc kubenswrapper[4998]: I0227 10:39:08.779980 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:39:08 crc kubenswrapper[4998]: I0227 10:39:08.798463 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:39:08 crc kubenswrapper[4998]: I0227 10:39:08.865550 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6fzk5"] Feb 27 10:39:08 crc kubenswrapper[4998]: I0227 10:39:08.865824 4998 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" podUID="73b3d7ab-a5fe-4bc8-a113-d665de7a3773" containerName="dnsmasq-dns" containerID="cri-o://49e89a12c7099fce6728d4b69138d6d1cb485e4360dc8f988fabd11fdd316cfe" gracePeriod=10 Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.394046 4998 generic.go:334] "Generic (PLEG): container finished" podID="73b3d7ab-a5fe-4bc8-a113-d665de7a3773" containerID="49e89a12c7099fce6728d4b69138d6d1cb485e4360dc8f988fabd11fdd316cfe" exitCode=0 Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.394275 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" event={"ID":"73b3d7ab-a5fe-4bc8-a113-d665de7a3773","Type":"ContainerDied","Data":"49e89a12c7099fce6728d4b69138d6d1cb485e4360dc8f988fabd11fdd316cfe"} Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.394530 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" event={"ID":"73b3d7ab-a5fe-4bc8-a113-d665de7a3773","Type":"ContainerDied","Data":"058df4b49fc0778bd235a482cdd9f81e0e1405e12b5f48fdf57e3658b430332c"} Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.394545 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="058df4b49fc0778bd235a482cdd9f81e0e1405e12b5f48fdf57e3658b430332c" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.395502 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerName="cinder-scheduler" containerID="cri-o://4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad" gracePeriod=30 Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.395754 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerName="probe" 
containerID="cri-o://1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db" gracePeriod=30 Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.423671 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.540644 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-sb\") pod \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.540704 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-svc\") pod \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.540758 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-nb\") pod \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.540815 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-swift-storage-0\") pod \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.540848 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftshk\" (UniqueName: \"kubernetes.io/projected/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-kube-api-access-ftshk\") pod 
\"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.541117 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-config\") pod \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\" (UID: \"73b3d7ab-a5fe-4bc8-a113-d665de7a3773\") " Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.547697 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-kube-api-access-ftshk" (OuterVolumeSpecName: "kube-api-access-ftshk") pod "73b3d7ab-a5fe-4bc8-a113-d665de7a3773" (UID: "73b3d7ab-a5fe-4bc8-a113-d665de7a3773"). InnerVolumeSpecName "kube-api-access-ftshk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.617895 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-config" (OuterVolumeSpecName: "config") pod "73b3d7ab-a5fe-4bc8-a113-d665de7a3773" (UID: "73b3d7ab-a5fe-4bc8-a113-d665de7a3773"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.632716 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73b3d7ab-a5fe-4bc8-a113-d665de7a3773" (UID: "73b3d7ab-a5fe-4bc8-a113-d665de7a3773"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.649408 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.649436 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.649446 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftshk\" (UniqueName: \"kubernetes.io/projected/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-kube-api-access-ftshk\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.690314 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73b3d7ab-a5fe-4bc8-a113-d665de7a3773" (UID: "73b3d7ab-a5fe-4bc8-a113-d665de7a3773"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.697831 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73b3d7ab-a5fe-4bc8-a113-d665de7a3773" (UID: "73b3d7ab-a5fe-4bc8-a113-d665de7a3773"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.706719 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73b3d7ab-a5fe-4bc8-a113-d665de7a3773" (UID: "73b3d7ab-a5fe-4bc8-a113-d665de7a3773"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.751464 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.751739 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:09 crc kubenswrapper[4998]: I0227 10:39:09.751755 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73b3d7ab-a5fe-4bc8-a113-d665de7a3773-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.403792 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-6fzk5" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.439021 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6fzk5"] Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.453375 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6fzk5"] Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.474105 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.479203 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.781420 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b3d7ab-a5fe-4bc8-a113-d665de7a3773" path="/var/lib/kubelet/pods/73b3d7ab-a5fe-4bc8-a113-d665de7a3773/volumes" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.862711 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7455798654-94zkv"] Feb 27 10:39:10 crc kubenswrapper[4998]: E0227 10:39:10.863158 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b3d7ab-a5fe-4bc8-a113-d665de7a3773" containerName="init" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.863180 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b3d7ab-a5fe-4bc8-a113-d665de7a3773" containerName="init" Feb 27 10:39:10 crc kubenswrapper[4998]: E0227 10:39:10.863197 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b3d7ab-a5fe-4bc8-a113-d665de7a3773" containerName="dnsmasq-dns" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.863205 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b3d7ab-a5fe-4bc8-a113-d665de7a3773" containerName="dnsmasq-dns" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.863446 4998 
memory_manager.go:354] "RemoveStaleState removing state" podUID="73b3d7ab-a5fe-4bc8-a113-d665de7a3773" containerName="dnsmasq-dns" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.864582 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.882927 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7455798654-94zkv"] Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.981490 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25wvb\" (UniqueName: \"kubernetes.io/projected/56b83176-1737-4d20-a5ad-0b88394e2d40-kube-api-access-25wvb\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.981568 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b83176-1737-4d20-a5ad-0b88394e2d40-logs\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.981598 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-internal-tls-certs\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.981628 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-scripts\") pod \"placement-7455798654-94zkv\" (UID: 
\"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.981792 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-public-tls-certs\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.981929 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-config-data\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:10 crc kubenswrapper[4998]: I0227 10:39:10.982122 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-combined-ca-bundle\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.084424 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25wvb\" (UniqueName: \"kubernetes.io/projected/56b83176-1737-4d20-a5ad-0b88394e2d40-kube-api-access-25wvb\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.084502 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b83176-1737-4d20-a5ad-0b88394e2d40-logs\") pod \"placement-7455798654-94zkv\" (UID: 
\"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.084527 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-internal-tls-certs\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.084555 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-scripts\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.084602 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-public-tls-certs\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.084710 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-config-data\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.084777 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-combined-ca-bundle\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" 
Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.085977 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b83176-1737-4d20-a5ad-0b88394e2d40-logs\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.090243 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-scripts\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.092102 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-config-data\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.094889 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-public-tls-certs\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.094905 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-internal-tls-certs\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.097916 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b83176-1737-4d20-a5ad-0b88394e2d40-combined-ca-bundle\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.114472 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25wvb\" (UniqueName: \"kubernetes.io/projected/56b83176-1737-4d20-a5ad-0b88394e2d40-kube-api-access-25wvb\") pod \"placement-7455798654-94zkv\" (UID: \"56b83176-1737-4d20-a5ad-0b88394e2d40\") " pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.190327 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.437315 4998 generic.go:334] "Generic (PLEG): container finished" podID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerID="1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db" exitCode=0 Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.437391 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbbbec2f-ab3a-413d-81d6-04a63d922de8","Type":"ContainerDied","Data":"1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db"} Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.742568 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7455798654-94zkv"] Feb 27 10:39:11 crc kubenswrapper[4998]: W0227 10:39:11.743877 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56b83176_1737_4d20_a5ad_0b88394e2d40.slice/crio-69f40facca8afa20b583c97a5a23e3693638ee9378aede88f79d7f2c0bfaf3f0 WatchSource:0}: Error finding container 69f40facca8afa20b583c97a5a23e3693638ee9378aede88f79d7f2c0bfaf3f0: Status 404 returned error can't find the 
container with id 69f40facca8afa20b583c97a5a23e3693638ee9378aede88f79d7f2c0bfaf3f0 Feb 27 10:39:11 crc kubenswrapper[4998]: I0227 10:39:11.919216 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.017785 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-public-tls-certs\") pod \"90e83aa1-ab9f-4409-9515-6df2c46796cc\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.017867 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-internal-tls-certs\") pod \"90e83aa1-ab9f-4409-9515-6df2c46796cc\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.017927 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-combined-ca-bundle\") pod \"90e83aa1-ab9f-4409-9515-6df2c46796cc\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.017966 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-httpd-config\") pod \"90e83aa1-ab9f-4409-9515-6df2c46796cc\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.017987 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7hl2\" (UniqueName: \"kubernetes.io/projected/90e83aa1-ab9f-4409-9515-6df2c46796cc-kube-api-access-s7hl2\") pod 
\"90e83aa1-ab9f-4409-9515-6df2c46796cc\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.018156 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-ovndb-tls-certs\") pod \"90e83aa1-ab9f-4409-9515-6df2c46796cc\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.018190 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-config\") pod \"90e83aa1-ab9f-4409-9515-6df2c46796cc\" (UID: \"90e83aa1-ab9f-4409-9515-6df2c46796cc\") " Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.038193 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e83aa1-ab9f-4409-9515-6df2c46796cc-kube-api-access-s7hl2" (OuterVolumeSpecName: "kube-api-access-s7hl2") pod "90e83aa1-ab9f-4409-9515-6df2c46796cc" (UID: "90e83aa1-ab9f-4409-9515-6df2c46796cc"). InnerVolumeSpecName "kube-api-access-s7hl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.053455 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "90e83aa1-ab9f-4409-9515-6df2c46796cc" (UID: "90e83aa1-ab9f-4409-9515-6df2c46796cc"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.123758 4998 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.123800 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7hl2\" (UniqueName: \"kubernetes.io/projected/90e83aa1-ab9f-4409-9515-6df2c46796cc-kube-api-access-s7hl2\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.162617 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-config" (OuterVolumeSpecName: "config") pod "90e83aa1-ab9f-4409-9515-6df2c46796cc" (UID: "90e83aa1-ab9f-4409-9515-6df2c46796cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.197486 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90e83aa1-ab9f-4409-9515-6df2c46796cc" (UID: "90e83aa1-ab9f-4409-9515-6df2c46796cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.205459 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "90e83aa1-ab9f-4409-9515-6df2c46796cc" (UID: "90e83aa1-ab9f-4409-9515-6df2c46796cc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.226890 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.226931 4998 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.226944 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.254364 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "90e83aa1-ab9f-4409-9515-6df2c46796cc" (UID: "90e83aa1-ab9f-4409-9515-6df2c46796cc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.304743 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "90e83aa1-ab9f-4409-9515-6df2c46796cc" (UID: "90e83aa1-ab9f-4409-9515-6df2c46796cc"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.329141 4998 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.329185 4998 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e83aa1-ab9f-4409-9515-6df2c46796cc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.447446 4998 generic.go:334] "Generic (PLEG): container finished" podID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerID="e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba" exitCode=0 Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.447514 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65b865b5bf-kvsff" event={"ID":"90e83aa1-ab9f-4409-9515-6df2c46796cc","Type":"ContainerDied","Data":"e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba"} Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.447803 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65b865b5bf-kvsff" event={"ID":"90e83aa1-ab9f-4409-9515-6df2c46796cc","Type":"ContainerDied","Data":"f885f06738127817b82268344421af2326d32f341336406fe9306a6fd3b0a478"} Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.447541 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-65b865b5bf-kvsff" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.447826 4998 scope.go:117] "RemoveContainer" containerID="dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.452059 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7455798654-94zkv" event={"ID":"56b83176-1737-4d20-a5ad-0b88394e2d40","Type":"ContainerStarted","Data":"a245199ab4b893986907590014e32c06b899066d0898830e40f08b8a01c5b7a0"} Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.452092 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7455798654-94zkv" event={"ID":"56b83176-1737-4d20-a5ad-0b88394e2d40","Type":"ContainerStarted","Data":"a5fb59a60b9c92cb0f92a951ee210fdaa37b2c0a122dbbc1c442c833376f647f"} Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.452102 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7455798654-94zkv" event={"ID":"56b83176-1737-4d20-a5ad-0b88394e2d40","Type":"ContainerStarted","Data":"69f40facca8afa20b583c97a5a23e3693638ee9378aede88f79d7f2c0bfaf3f0"} Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.452284 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.452316 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7455798654-94zkv" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.473706 4998 scope.go:117] "RemoveContainer" containerID="e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.490980 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7455798654-94zkv" podStartSLOduration=2.49079283 podStartE2EDuration="2.49079283s" podCreationTimestamp="2026-02-27 10:39:10 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:12.479492281 +0000 UTC m=+1304.477763249" watchObservedRunningTime="2026-02-27 10:39:12.49079283 +0000 UTC m=+1304.489063798" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.495949 4998 scope.go:117] "RemoveContainer" containerID="dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9" Feb 27 10:39:12 crc kubenswrapper[4998]: E0227 10:39:12.496383 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9\": container with ID starting with dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9 not found: ID does not exist" containerID="dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.496420 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9"} err="failed to get container status \"dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9\": rpc error: code = NotFound desc = could not find container \"dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9\": container with ID starting with dc3dad9b629d02afd0597eac6ecebaefdc7cb58f4d94e9ce3c84a781733473a9 not found: ID does not exist" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.496444 4998 scope.go:117] "RemoveContainer" containerID="e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba" Feb 27 10:39:12 crc kubenswrapper[4998]: E0227 10:39:12.496768 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba\": container with ID starting with 
e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba not found: ID does not exist" containerID="e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.496792 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba"} err="failed to get container status \"e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba\": rpc error: code = NotFound desc = could not find container \"e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba\": container with ID starting with e088aa17038527b16d120351b78485f781fe5e9522b19f745c98878c44b295ba not found: ID does not exist" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.509292 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65b865b5bf-kvsff"] Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.516778 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-65b865b5bf-kvsff"] Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.789454 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" path="/var/lib/kubelet/pods/90e83aa1-ab9f-4409-9515-6df2c46796cc/volumes" Feb 27 10:39:12 crc kubenswrapper[4998]: I0227 10:39:12.878025 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.207980 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.273860 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-combined-ca-bundle\") pod \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.273908 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data-custom\") pod \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.273931 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbbbec2f-ab3a-413d-81d6-04a63d922de8-etc-machine-id\") pod \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.273993 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-scripts\") pod \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.274026 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwnxc\" (UniqueName: \"kubernetes.io/projected/fbbbec2f-ab3a-413d-81d6-04a63d922de8-kube-api-access-hwnxc\") pod \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.274068 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data\") pod \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\" (UID: \"fbbbec2f-ab3a-413d-81d6-04a63d922de8\") " Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.274465 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbbec2f-ab3a-413d-81d6-04a63d922de8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fbbbec2f-ab3a-413d-81d6-04a63d922de8" (UID: "fbbbec2f-ab3a-413d-81d6-04a63d922de8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.285571 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbbbec2f-ab3a-413d-81d6-04a63d922de8-kube-api-access-hwnxc" (OuterVolumeSpecName: "kube-api-access-hwnxc") pod "fbbbec2f-ab3a-413d-81d6-04a63d922de8" (UID: "fbbbec2f-ab3a-413d-81d6-04a63d922de8"). InnerVolumeSpecName "kube-api-access-hwnxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.295033 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fbbbec2f-ab3a-413d-81d6-04a63d922de8" (UID: "fbbbec2f-ab3a-413d-81d6-04a63d922de8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.300154 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-scripts" (OuterVolumeSpecName: "scripts") pod "fbbbec2f-ab3a-413d-81d6-04a63d922de8" (UID: "fbbbec2f-ab3a-413d-81d6-04a63d922de8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.340432 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbbbec2f-ab3a-413d-81d6-04a63d922de8" (UID: "fbbbec2f-ab3a-413d-81d6-04a63d922de8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.376270 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.376585 4998 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.376673 4998 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbbbec2f-ab3a-413d-81d6-04a63d922de8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.376792 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.376887 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwnxc\" (UniqueName: \"kubernetes.io/projected/fbbbec2f-ab3a-413d-81d6-04a63d922de8-kube-api-access-hwnxc\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.413474 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data" (OuterVolumeSpecName: "config-data") pod "fbbbec2f-ab3a-413d-81d6-04a63d922de8" (UID: "fbbbec2f-ab3a-413d-81d6-04a63d922de8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.473006 4998 generic.go:334] "Generic (PLEG): container finished" podID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerID="4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad" exitCode=0 Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.473292 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbbbec2f-ab3a-413d-81d6-04a63d922de8","Type":"ContainerDied","Data":"4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad"} Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.473418 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbbbec2f-ab3a-413d-81d6-04a63d922de8","Type":"ContainerDied","Data":"75d663a7be2ffb744cf1960d3de563689971caab77da4d8cbd50c99858c14ba5"} Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.473508 4998 scope.go:117] "RemoveContainer" containerID="1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.473686 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.479847 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbbec2f-ab3a-413d-81d6-04a63d922de8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.505352 4998 scope.go:117] "RemoveContainer" containerID="4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.514274 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.527537 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.538182 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:39:14 crc kubenswrapper[4998]: E0227 10:39:14.538680 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-api" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.538704 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-api" Feb 27 10:39:14 crc kubenswrapper[4998]: E0227 10:39:14.538748 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerName="cinder-scheduler" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.538758 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerName="cinder-scheduler" Feb 27 10:39:14 crc kubenswrapper[4998]: E0227 10:39:14.538773 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerName="probe" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 
10:39:14.538782 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerName="probe" Feb 27 10:39:14 crc kubenswrapper[4998]: E0227 10:39:14.538804 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-httpd" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.538812 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-httpd" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.539018 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerName="probe" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.539040 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-api" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.539056 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e83aa1-ab9f-4409-9515-6df2c46796cc" containerName="neutron-httpd" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.539070 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" containerName="cinder-scheduler" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.540279 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.545526 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.547459 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.553568 4998 scope.go:117] "RemoveContainer" containerID="1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db" Feb 27 10:39:14 crc kubenswrapper[4998]: E0227 10:39:14.554629 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db\": container with ID starting with 1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db not found: ID does not exist" containerID="1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.554764 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db"} err="failed to get container status \"1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db\": rpc error: code = NotFound desc = could not find container \"1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db\": container with ID starting with 1d4581fa2d57a706106f3ce3d48ebece994cf4bbe669d29a1efaa1651f2b37db not found: ID does not exist" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.554861 4998 scope.go:117] "RemoveContainer" containerID="4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad" Feb 27 10:39:14 crc kubenswrapper[4998]: E0227 10:39:14.555199 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad\": container with ID starting with 4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad not found: ID does not exist" containerID="4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.555363 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad"} err="failed to get container status \"4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad\": rpc error: code = NotFound desc = could not find container \"4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad\": container with ID starting with 4baa6eb69d4b3b824717412864bcaad7d13b276cd792825b5ac26a385908fdad not found: ID does not exist" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.672958 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5d7f558cb4-k5mxh" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.682833 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.683097 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvv6\" (UniqueName: \"kubernetes.io/projected/7a27e8b4-9378-4f49-8a5b-a336418a70e6-kube-api-access-vkvv6\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.683740 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.683856 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.683888 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.684911 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a27e8b4-9378-4f49-8a5b-a336418a70e6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.734511 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64c87cb5cd-prbxl"] Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.737737 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64c87cb5cd-prbxl" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon-log" containerID="cri-o://0d4b5db9912175f7b371aef129e1a3835005cfc41d31b381885990ea6c8dca9e" gracePeriod=30 Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.737980 4998 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64c87cb5cd-prbxl" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon" containerID="cri-o://4d13db667b5d6248ab35065cb2dad51e3d395498b2e94ba8f06c2fdb4455a5a9" gracePeriod=30 Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.777322 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbbbec2f-ab3a-413d-81d6-04a63d922de8" path="/var/lib/kubelet/pods/fbbbec2f-ab3a-413d-81d6-04a63d922de8/volumes" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.792156 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.792273 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a27e8b4-9378-4f49-8a5b-a336418a70e6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.792352 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.792469 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkvv6\" (UniqueName: \"kubernetes.io/projected/7a27e8b4-9378-4f49-8a5b-a336418a70e6-kube-api-access-vkvv6\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " 
pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.792508 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.792548 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.796521 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.797936 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.798271 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.798607 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7a27e8b4-9378-4f49-8a5b-a336418a70e6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.801333 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a27e8b4-9378-4f49-8a5b-a336418a70e6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.816712 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkvv6\" (UniqueName: \"kubernetes.io/projected/7a27e8b4-9378-4f49-8a5b-a336418a70e6-kube-api-access-vkvv6\") pod \"cinder-scheduler-0\" (UID: \"7a27e8b4-9378-4f49-8a5b-a336418a70e6\") " pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.856334 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.955106 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:14 crc kubenswrapper[4998]: I0227 10:39:14.975118 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8fcf676c4-nnmzc" Feb 27 10:39:15 crc kubenswrapper[4998]: I0227 10:39:15.054676 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f9f5dcf86-47qzc"] Feb 27 10:39:15 crc kubenswrapper[4998]: I0227 10:39:15.054913 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f9f5dcf86-47qzc" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api-log" containerID="cri-o://c6fa1e017f2401e33d8bbbf0501bef296439ae903050928ff9e72691331bf252" gracePeriod=30 Feb 27 10:39:15 crc kubenswrapper[4998]: I0227 10:39:15.055422 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f9f5dcf86-47qzc" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api" containerID="cri-o://3a58f2334e863f49237f38e612faecdb92f93f8dd4b4f3e71207362e0b8e4ee9" gracePeriod=30 Feb 27 10:39:15 crc kubenswrapper[4998]: I0227 10:39:15.362838 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:39:15 crc kubenswrapper[4998]: I0227 10:39:15.487154 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a27e8b4-9378-4f49-8a5b-a336418a70e6","Type":"ContainerStarted","Data":"91e2909f66429708d78ecf7e1dd74e94293cca6a486339107f286390321c51d8"} Feb 27 10:39:15 crc kubenswrapper[4998]: I0227 10:39:15.491453 4998 generic.go:334] "Generic (PLEG): container finished" podID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" 
containerID="c6fa1e017f2401e33d8bbbf0501bef296439ae903050928ff9e72691331bf252" exitCode=143 Feb 27 10:39:15 crc kubenswrapper[4998]: I0227 10:39:15.491537 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f9f5dcf86-47qzc" event={"ID":"424ef47f-d9ad-49f6-b3bf-d72859ab02c8","Type":"ContainerDied","Data":"c6fa1e017f2401e33d8bbbf0501bef296439ae903050928ff9e72691331bf252"} Feb 27 10:39:16 crc kubenswrapper[4998]: I0227 10:39:16.508589 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a27e8b4-9378-4f49-8a5b-a336418a70e6","Type":"ContainerStarted","Data":"725c07885f3a8b98b39777d9caf16eeaf36c2896e312784d0c57b3af283ea6a4"} Feb 27 10:39:17 crc kubenswrapper[4998]: I0227 10:39:17.525126 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a27e8b4-9378-4f49-8a5b-a336418a70e6","Type":"ContainerStarted","Data":"766d4a7334aa6e2e8041116643246a4e389fdcdbeccc17c25582ec0e39d37276"} Feb 27 10:39:17 crc kubenswrapper[4998]: I0227 10:39:17.545927 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.545904417 podStartE2EDuration="3.545904417s" podCreationTimestamp="2026-02-27 10:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:17.54440943 +0000 UTC m=+1309.542680398" watchObservedRunningTime="2026-02-27 10:39:17.545904417 +0000 UTC m=+1309.544175385" Feb 27 10:39:17 crc kubenswrapper[4998]: I0227 10:39:17.865121 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.342902 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f9f5dcf86-47qzc" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api" 
probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:35624->10.217.0.165:9311: read: connection reset by peer" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.343743 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f9f5dcf86-47qzc" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:35628->10.217.0.165:9311: read: connection reset by peer" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.538053 4998 generic.go:334] "Generic (PLEG): container finished" podID="63a87b91-16fb-436d-8c53-317b204acebc" containerID="4d13db667b5d6248ab35065cb2dad51e3d395498b2e94ba8f06c2fdb4455a5a9" exitCode=0 Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.538107 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64c87cb5cd-prbxl" event={"ID":"63a87b91-16fb-436d-8c53-317b204acebc","Type":"ContainerDied","Data":"4d13db667b5d6248ab35065cb2dad51e3d395498b2e94ba8f06c2fdb4455a5a9"} Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.540349 4998 generic.go:334] "Generic (PLEG): container finished" podID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerID="3a58f2334e863f49237f38e612faecdb92f93f8dd4b4f3e71207362e0b8e4ee9" exitCode=0 Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.540386 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f9f5dcf86-47qzc" event={"ID":"424ef47f-d9ad-49f6-b3bf-d72859ab02c8","Type":"ContainerDied","Data":"3a58f2334e863f49237f38e612faecdb92f93f8dd4b4f3e71207362e0b8e4ee9"} Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.778030 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f9f5dcf86-47qzc" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.873094 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-combined-ca-bundle\") pod \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.873163 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-logs\") pod \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.873312 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data-custom\") pod \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.873357 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77wbz\" (UniqueName: \"kubernetes.io/projected/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-kube-api-access-77wbz\") pod \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.873406 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data\") pod \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\" (UID: \"424ef47f-d9ad-49f6-b3bf-d72859ab02c8\") " Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.874077 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-logs" (OuterVolumeSpecName: "logs") pod "424ef47f-d9ad-49f6-b3bf-d72859ab02c8" (UID: "424ef47f-d9ad-49f6-b3bf-d72859ab02c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.887462 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-kube-api-access-77wbz" (OuterVolumeSpecName: "kube-api-access-77wbz") pod "424ef47f-d9ad-49f6-b3bf-d72859ab02c8" (UID: "424ef47f-d9ad-49f6-b3bf-d72859ab02c8"). InnerVolumeSpecName "kube-api-access-77wbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.896667 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "424ef47f-d9ad-49f6-b3bf-d72859ab02c8" (UID: "424ef47f-d9ad-49f6-b3bf-d72859ab02c8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.901112 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "424ef47f-d9ad-49f6-b3bf-d72859ab02c8" (UID: "424ef47f-d9ad-49f6-b3bf-d72859ab02c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.930476 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data" (OuterVolumeSpecName: "config-data") pod "424ef47f-d9ad-49f6-b3bf-d72859ab02c8" (UID: "424ef47f-d9ad-49f6-b3bf-d72859ab02c8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.976116 4998 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.976217 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77wbz\" (UniqueName: \"kubernetes.io/projected/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-kube-api-access-77wbz\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.976305 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.976327 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.976347 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/424ef47f-d9ad-49f6-b3bf-d72859ab02c8-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:18 crc kubenswrapper[4998]: I0227 10:39:18.985860 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7fb98b967f-nv7q9" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.271695 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 27 10:39:19 crc kubenswrapper[4998]: E0227 10:39:19.272115 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api-log" Feb 27 10:39:19 crc kubenswrapper[4998]: 
I0227 10:39:19.272132 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api-log" Feb 27 10:39:19 crc kubenswrapper[4998]: E0227 10:39:19.272157 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.272163 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.272347 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.272374 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" containerName="barbican-api-log" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.272996 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.276700 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.277468 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-gdvtw" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.277769 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.282137 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.383499 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76bds\" (UniqueName: \"kubernetes.io/projected/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-kube-api-access-76bds\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.383608 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-openstack-config-secret\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.383663 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.383699 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-openstack-config\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.485847 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76bds\" (UniqueName: \"kubernetes.io/projected/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-kube-api-access-76bds\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.485975 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-openstack-config-secret\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.486042 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.486085 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-openstack-config\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.487691 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-openstack-config\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.494005 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.496011 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-openstack-config-secret\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.513821 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76bds\" (UniqueName: \"kubernetes.io/projected/f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5-kube-api-access-76bds\") pod \"openstackclient\" (UID: \"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5\") " pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.551845 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f9f5dcf86-47qzc" event={"ID":"424ef47f-d9ad-49f6-b3bf-d72859ab02c8","Type":"ContainerDied","Data":"22c20520db6dc1f397a5e4d6a73f488786c4f02585c0b68550c79f915be73183"} Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.552791 4998 scope.go:117] "RemoveContainer" containerID="3a58f2334e863f49237f38e612faecdb92f93f8dd4b4f3e71207362e0b8e4ee9" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.551930 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f9f5dcf86-47qzc" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.590538 4998 scope.go:117] "RemoveContainer" containerID="c6fa1e017f2401e33d8bbbf0501bef296439ae903050928ff9e72691331bf252" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.600832 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.608418 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f9f5dcf86-47qzc"] Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.619559 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5f9f5dcf86-47qzc"] Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.856620 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 10:39:19 crc kubenswrapper[4998]: I0227 10:39:19.900644 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64c87cb5cd-prbxl" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Feb 27 10:39:20 crc kubenswrapper[4998]: I0227 10:39:20.093723 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 10:39:20 crc kubenswrapper[4998]: I0227 10:39:20.562320 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5","Type":"ContainerStarted","Data":"3bfc286aeb0cd27ed214b16b77f2202d453ae3f9052d7e9c620cdef91961d24f"} Feb 27 10:39:20 crc kubenswrapper[4998]: I0227 10:39:20.777033 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="424ef47f-d9ad-49f6-b3bf-d72859ab02c8" 
path="/var/lib/kubelet/pods/424ef47f-d9ad-49f6-b3bf-d72859ab02c8/volumes" Feb 27 10:39:22 crc kubenswrapper[4998]: I0227 10:39:22.965688 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7758b6f85-bxf6h"] Feb 27 10:39:22 crc kubenswrapper[4998]: I0227 10:39:22.967889 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:22 crc kubenswrapper[4998]: I0227 10:39:22.976544 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 27 10:39:22 crc kubenswrapper[4998]: I0227 10:39:22.977333 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 10:39:22 crc kubenswrapper[4998]: I0227 10:39:22.978330 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.042205 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7758b6f85-bxf6h"] Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.064437 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/304f7a70-581a-407b-9280-fe7642feb71f-etc-swift\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.064511 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/304f7a70-581a-407b-9280-fe7642feb71f-run-httpd\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.064557 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-config-data\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.064596 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/304f7a70-581a-407b-9280-fe7642feb71f-log-httpd\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.064613 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkw79\" (UniqueName: \"kubernetes.io/projected/304f7a70-581a-407b-9280-fe7642feb71f-kube-api-access-pkw79\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.064661 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-combined-ca-bundle\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.064755 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-internal-tls-certs\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 
10:39:23.064781 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-public-tls-certs\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.167533 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-internal-tls-certs\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.167613 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-public-tls-certs\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.168685 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/304f7a70-581a-407b-9280-fe7642feb71f-etc-swift\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.168729 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/304f7a70-581a-407b-9280-fe7642feb71f-run-httpd\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.168776 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-config-data\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.168827 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/304f7a70-581a-407b-9280-fe7642feb71f-log-httpd\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.168849 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkw79\" (UniqueName: \"kubernetes.io/projected/304f7a70-581a-407b-9280-fe7642feb71f-kube-api-access-pkw79\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.168903 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-combined-ca-bundle\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.169583 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/304f7a70-581a-407b-9280-fe7642feb71f-log-httpd\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.169663 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/304f7a70-581a-407b-9280-fe7642feb71f-run-httpd\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.174635 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-public-tls-certs\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.176046 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-config-data\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.176655 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-combined-ca-bundle\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.181990 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/304f7a70-581a-407b-9280-fe7642feb71f-internal-tls-certs\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.190581 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/304f7a70-581a-407b-9280-fe7642feb71f-etc-swift\") pod \"swift-proxy-7758b6f85-bxf6h\" 
(UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.192640 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkw79\" (UniqueName: \"kubernetes.io/projected/304f7a70-581a-407b-9280-fe7642feb71f-kube-api-access-pkw79\") pod \"swift-proxy-7758b6f85-bxf6h\" (UID: \"304f7a70-581a-407b-9280-fe7642feb71f\") " pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.289311 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.925861 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7758b6f85-bxf6h"] Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.940311 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.940784 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="ceilometer-central-agent" containerID="cri-o://b5b4115a4eba4bfd7cad294788f637c5a83d54134d82cbcafc6865077b76c827" gracePeriod=30 Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.941506 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="proxy-httpd" containerID="cri-o://952add9e480d2f75a6dc35ed32f4f1d2c64eff0c39f1db92eb9e22b123d65e62" gracePeriod=30 Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.941580 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="ceilometer-notification-agent" 
containerID="cri-o://7c2ed0082690f16a18ed325c460eaa42a7a8a3ab54d79065151b8f70f891d707" gracePeriod=30 Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.941780 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="sg-core" containerID="cri-o://3b7b29a9a14470e97f5459925c46251ada2e7b75204c1d2ec909d6dd2782441a" gracePeriod=30 Feb 27 10:39:23 crc kubenswrapper[4998]: I0227 10:39:23.953174 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": EOF" Feb 27 10:39:23 crc kubenswrapper[4998]: W0227 10:39:23.956600 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod304f7a70_581a_407b_9280_fe7642feb71f.slice/crio-0e98ce2bc012d3c7e4ed1aeb4d73a76cefe8fcf914b5c9b2950c30d981b67006 WatchSource:0}: Error finding container 0e98ce2bc012d3c7e4ed1aeb4d73a76cefe8fcf914b5c9b2950c30d981b67006: Status 404 returned error can't find the container with id 0e98ce2bc012d3c7e4ed1aeb4d73a76cefe8fcf914b5c9b2950c30d981b67006 Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.609007 4998 generic.go:334] "Generic (PLEG): container finished" podID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerID="952add9e480d2f75a6dc35ed32f4f1d2c64eff0c39f1db92eb9e22b123d65e62" exitCode=0 Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.609548 4998 generic.go:334] "Generic (PLEG): container finished" podID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerID="3b7b29a9a14470e97f5459925c46251ada2e7b75204c1d2ec909d6dd2782441a" exitCode=2 Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.609564 4998 generic.go:334] "Generic (PLEG): container finished" podID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" 
containerID="7c2ed0082690f16a18ed325c460eaa42a7a8a3ab54d79065151b8f70f891d707" exitCode=0 Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.609571 4998 generic.go:334] "Generic (PLEG): container finished" podID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerID="b5b4115a4eba4bfd7cad294788f637c5a83d54134d82cbcafc6865077b76c827" exitCode=0 Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.609181 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerDied","Data":"952add9e480d2f75a6dc35ed32f4f1d2c64eff0c39f1db92eb9e22b123d65e62"} Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.609636 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerDied","Data":"3b7b29a9a14470e97f5459925c46251ada2e7b75204c1d2ec909d6dd2782441a"} Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.609649 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerDied","Data":"7c2ed0082690f16a18ed325c460eaa42a7a8a3ab54d79065151b8f70f891d707"} Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.609660 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerDied","Data":"b5b4115a4eba4bfd7cad294788f637c5a83d54134d82cbcafc6865077b76c827"} Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.617401 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7758b6f85-bxf6h" event={"ID":"304f7a70-581a-407b-9280-fe7642feb71f","Type":"ContainerStarted","Data":"1a03b0224207bb1c3809cf2ddc329fa77884720105ac60ef54a04e299b1a3d79"} Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.617435 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-7758b6f85-bxf6h" event={"ID":"304f7a70-581a-407b-9280-fe7642feb71f","Type":"ContainerStarted","Data":"002622e84cca7252fec962773db1c551492add75a350c1a23fe838d6e192856a"} Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.617445 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7758b6f85-bxf6h" event={"ID":"304f7a70-581a-407b-9280-fe7642feb71f","Type":"ContainerStarted","Data":"0e98ce2bc012d3c7e4ed1aeb4d73a76cefe8fcf914b5c9b2950c30d981b67006"} Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.617813 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.645580 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7758b6f85-bxf6h" podStartSLOduration=2.64555981 podStartE2EDuration="2.64555981s" podCreationTimestamp="2026-02-27 10:39:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:24.634631834 +0000 UTC m=+1316.632902812" watchObservedRunningTime="2026-02-27 10:39:24.64555981 +0000 UTC m=+1316.643830778" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.672598 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.815054 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-config-data\") pod \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.815147 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-scripts\") pod \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.815184 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-sg-core-conf-yaml\") pod \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.815216 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-log-httpd\") pod \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.815253 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-combined-ca-bundle\") pod \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.815296 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8hw7\" (UniqueName: 
\"kubernetes.io/projected/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-kube-api-access-q8hw7\") pod \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.815318 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-run-httpd\") pod \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\" (UID: \"b318c0e6-bda6-4a0b-9f83-6a636ef90c08\") " Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.816097 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b318c0e6-bda6-4a0b-9f83-6a636ef90c08" (UID: "b318c0e6-bda6-4a0b-9f83-6a636ef90c08"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.819100 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b318c0e6-bda6-4a0b-9f83-6a636ef90c08" (UID: "b318c0e6-bda6-4a0b-9f83-6a636ef90c08"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.820578 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-kube-api-access-q8hw7" (OuterVolumeSpecName: "kube-api-access-q8hw7") pod "b318c0e6-bda6-4a0b-9f83-6a636ef90c08" (UID: "b318c0e6-bda6-4a0b-9f83-6a636ef90c08"). InnerVolumeSpecName "kube-api-access-q8hw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.822278 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-scripts" (OuterVolumeSpecName: "scripts") pod "b318c0e6-bda6-4a0b-9f83-6a636ef90c08" (UID: "b318c0e6-bda6-4a0b-9f83-6a636ef90c08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.874553 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-w2dlm"] Feb 27 10:39:24 crc kubenswrapper[4998]: E0227 10:39:24.875219 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="proxy-httpd" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.875257 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="proxy-httpd" Feb 27 10:39:24 crc kubenswrapper[4998]: E0227 10:39:24.875269 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="ceilometer-notification-agent" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.875276 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="ceilometer-notification-agent" Feb 27 10:39:24 crc kubenswrapper[4998]: E0227 10:39:24.875289 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="sg-core" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.875295 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="sg-core" Feb 27 10:39:24 crc kubenswrapper[4998]: E0227 10:39:24.875303 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" 
containerName="ceilometer-central-agent" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.875309 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="ceilometer-central-agent" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.875529 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="ceilometer-central-agent" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.875545 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="sg-core" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.875554 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="proxy-httpd" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.875563 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" containerName="ceilometer-notification-agent" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.876439 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-w2dlm" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.888346 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b318c0e6-bda6-4a0b-9f83-6a636ef90c08" (UID: "b318c0e6-bda6-4a0b-9f83-6a636ef90c08"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.898046 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-w2dlm"] Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.920293 4998 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.920337 4998 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.920348 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8hw7\" (UniqueName: \"kubernetes.io/projected/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-kube-api-access-q8hw7\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.920358 4998 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.920368 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.953723 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b318c0e6-bda6-4a0b-9f83-6a636ef90c08" (UID: "b318c0e6-bda6-4a0b-9f83-6a636ef90c08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.968078 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-config-data" (OuterVolumeSpecName: "config-data") pod "b318c0e6-bda6-4a0b-9f83-6a636ef90c08" (UID: "b318c0e6-bda6-4a0b-9f83-6a636ef90c08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.969301 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5249f"]
Feb 27 10:39:24 crc kubenswrapper[4998]: I0227 10:39:24.970794 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5249f"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.003344 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8d8b-account-create-update-gkdwk"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.005727 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8d8b-account-create-update-gkdwk"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.010901 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.022327 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa43c986-f63e-4e51-9c39-6f3d39260745-operator-scripts\") pod \"nova-api-db-create-w2dlm\" (UID: \"aa43c986-f63e-4e51-9c39-6f3d39260745\") " pod="openstack/nova-api-db-create-w2dlm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.022462 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf82d\" (UniqueName: \"kubernetes.io/projected/aa43c986-f63e-4e51-9c39-6f3d39260745-kube-api-access-nf82d\") pod \"nova-api-db-create-w2dlm\" (UID: \"aa43c986-f63e-4e51-9c39-6f3d39260745\") " pod="openstack/nova-api-db-create-w2dlm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.022550 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.022566 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b318c0e6-bda6-4a0b-9f83-6a636ef90c08-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.033470 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5249f"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.043266 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8d8b-account-create-update-gkdwk"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.074133 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kbvf6"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.075408 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kbvf6"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.082314 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kbvf6"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.125773 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8sw\" (UniqueName: \"kubernetes.io/projected/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-kube-api-access-kl8sw\") pod \"nova-api-8d8b-account-create-update-gkdwk\" (UID: \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\") " pod="openstack/nova-api-8d8b-account-create-update-gkdwk"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.125878 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa43c986-f63e-4e51-9c39-6f3d39260745-operator-scripts\") pod \"nova-api-db-create-w2dlm\" (UID: \"aa43c986-f63e-4e51-9c39-6f3d39260745\") " pod="openstack/nova-api-db-create-w2dlm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.125913 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb09f66d-895d-4368-beac-96ae5467a97a-operator-scripts\") pod \"nova-cell0-db-create-5249f\" (UID: \"eb09f66d-895d-4368-beac-96ae5467a97a\") " pod="openstack/nova-cell0-db-create-5249f"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.125955 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vldh\" (UniqueName: \"kubernetes.io/projected/eb09f66d-895d-4368-beac-96ae5467a97a-kube-api-access-8vldh\") pod \"nova-cell0-db-create-5249f\" (UID: \"eb09f66d-895d-4368-beac-96ae5467a97a\") " pod="openstack/nova-cell0-db-create-5249f"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.126088 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-operator-scripts\") pod \"nova-api-8d8b-account-create-update-gkdwk\" (UID: \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\") " pod="openstack/nova-api-8d8b-account-create-update-gkdwk"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.126128 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf82d\" (UniqueName: \"kubernetes.io/projected/aa43c986-f63e-4e51-9c39-6f3d39260745-kube-api-access-nf82d\") pod \"nova-api-db-create-w2dlm\" (UID: \"aa43c986-f63e-4e51-9c39-6f3d39260745\") " pod="openstack/nova-api-db-create-w2dlm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.128494 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa43c986-f63e-4e51-9c39-6f3d39260745-operator-scripts\") pod \"nova-api-db-create-w2dlm\" (UID: \"aa43c986-f63e-4e51-9c39-6f3d39260745\") " pod="openstack/nova-api-db-create-w2dlm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.156834 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf82d\" (UniqueName: \"kubernetes.io/projected/aa43c986-f63e-4e51-9c39-6f3d39260745-kube-api-access-nf82d\") pod \"nova-api-db-create-w2dlm\" (UID: \"aa43c986-f63e-4e51-9c39-6f3d39260745\") " pod="openstack/nova-api-db-create-w2dlm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.179280 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2d4e-account-create-update-ntthm"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.183710 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2d4e-account-create-update-ntthm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.190979 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.192500 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-w2dlm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.206547 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.228975 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crl7k\" (UniqueName: \"kubernetes.io/projected/2eea7aa4-efe1-4601-84a2-5dce0446e27e-kube-api-access-crl7k\") pod \"nova-cell1-db-create-kbvf6\" (UID: \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\") " pod="openstack/nova-cell1-db-create-kbvf6"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.229091 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-operator-scripts\") pod \"nova-api-8d8b-account-create-update-gkdwk\" (UID: \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\") " pod="openstack/nova-api-8d8b-account-create-update-gkdwk"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.229202 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl8sw\" (UniqueName: \"kubernetes.io/projected/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-kube-api-access-kl8sw\") pod \"nova-api-8d8b-account-create-update-gkdwk\" (UID: \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\") " pod="openstack/nova-api-8d8b-account-create-update-gkdwk"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.230800 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb09f66d-895d-4368-beac-96ae5467a97a-operator-scripts\") pod \"nova-cell0-db-create-5249f\" (UID: \"eb09f66d-895d-4368-beac-96ae5467a97a\") " pod="openstack/nova-cell0-db-create-5249f"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.230884 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vldh\" (UniqueName: \"kubernetes.io/projected/eb09f66d-895d-4368-beac-96ae5467a97a-kube-api-access-8vldh\") pod \"nova-cell0-db-create-5249f\" (UID: \"eb09f66d-895d-4368-beac-96ae5467a97a\") " pod="openstack/nova-cell0-db-create-5249f"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.230969 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eea7aa4-efe1-4601-84a2-5dce0446e27e-operator-scripts\") pod \"nova-cell1-db-create-kbvf6\" (UID: \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\") " pod="openstack/nova-cell1-db-create-kbvf6"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.231921 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-operator-scripts\") pod \"nova-api-8d8b-account-create-update-gkdwk\" (UID: \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\") " pod="openstack/nova-api-8d8b-account-create-update-gkdwk"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.232386 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb09f66d-895d-4368-beac-96ae5467a97a-operator-scripts\") pod \"nova-cell0-db-create-5249f\" (UID: \"eb09f66d-895d-4368-beac-96ae5467a97a\") " pod="openstack/nova-cell0-db-create-5249f"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.255331 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2d4e-account-create-update-ntthm"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.261120 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl8sw\" (UniqueName: \"kubernetes.io/projected/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-kube-api-access-kl8sw\") pod \"nova-api-8d8b-account-create-update-gkdwk\" (UID: \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\") " pod="openstack/nova-api-8d8b-account-create-update-gkdwk"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.273011 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vldh\" (UniqueName: \"kubernetes.io/projected/eb09f66d-895d-4368-beac-96ae5467a97a-kube-api-access-8vldh\") pod \"nova-cell0-db-create-5249f\" (UID: \"eb09f66d-895d-4368-beac-96ae5467a97a\") " pod="openstack/nova-cell0-db-create-5249f"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.302213 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5249f"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.333849 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-operator-scripts\") pod \"nova-cell0-2d4e-account-create-update-ntthm\" (UID: \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\") " pod="openstack/nova-cell0-2d4e-account-create-update-ntthm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.334276 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2q7\" (UniqueName: \"kubernetes.io/projected/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-kube-api-access-6f2q7\") pod \"nova-cell0-2d4e-account-create-update-ntthm\" (UID: \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\") " pod="openstack/nova-cell0-2d4e-account-create-update-ntthm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.335071 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8d8b-account-create-update-gkdwk"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.337061 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eea7aa4-efe1-4601-84a2-5dce0446e27e-operator-scripts\") pod \"nova-cell1-db-create-kbvf6\" (UID: \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\") " pod="openstack/nova-cell1-db-create-kbvf6"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.337462 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crl7k\" (UniqueName: \"kubernetes.io/projected/2eea7aa4-efe1-4601-84a2-5dce0446e27e-kube-api-access-crl7k\") pod \"nova-cell1-db-create-kbvf6\" (UID: \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\") " pod="openstack/nova-cell1-db-create-kbvf6"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.338255 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eea7aa4-efe1-4601-84a2-5dce0446e27e-operator-scripts\") pod \"nova-cell1-db-create-kbvf6\" (UID: \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\") " pod="openstack/nova-cell1-db-create-kbvf6"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.369536 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crl7k\" (UniqueName: \"kubernetes.io/projected/2eea7aa4-efe1-4601-84a2-5dce0446e27e-kube-api-access-crl7k\") pod \"nova-cell1-db-create-kbvf6\" (UID: \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\") " pod="openstack/nova-cell1-db-create-kbvf6"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.381147 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8f1e-account-create-update-s2nlt"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.386873 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.392509 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.395003 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8f1e-account-create-update-s2nlt"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.396304 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kbvf6"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.441426 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f2q7\" (UniqueName: \"kubernetes.io/projected/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-kube-api-access-6f2q7\") pod \"nova-cell0-2d4e-account-create-update-ntthm\" (UID: \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\") " pod="openstack/nova-cell0-2d4e-account-create-update-ntthm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.441905 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159df315-6004-48f3-a1f3-192ee4c02588-operator-scripts\") pod \"nova-cell1-8f1e-account-create-update-s2nlt\" (UID: \"159df315-6004-48f3-a1f3-192ee4c02588\") " pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.442169 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkgdd\" (UniqueName: \"kubernetes.io/projected/159df315-6004-48f3-a1f3-192ee4c02588-kube-api-access-wkgdd\") pod \"nova-cell1-8f1e-account-create-update-s2nlt\" (UID: \"159df315-6004-48f3-a1f3-192ee4c02588\") " pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.442276 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-operator-scripts\") pod \"nova-cell0-2d4e-account-create-update-ntthm\" (UID: \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\") " pod="openstack/nova-cell0-2d4e-account-create-update-ntthm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.443091 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-operator-scripts\") pod \"nova-cell0-2d4e-account-create-update-ntthm\" (UID: \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\") " pod="openstack/nova-cell0-2d4e-account-create-update-ntthm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.459917 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f2q7\" (UniqueName: \"kubernetes.io/projected/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-kube-api-access-6f2q7\") pod \"nova-cell0-2d4e-account-create-update-ntthm\" (UID: \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\") " pod="openstack/nova-cell0-2d4e-account-create-update-ntthm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.547147 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkgdd\" (UniqueName: \"kubernetes.io/projected/159df315-6004-48f3-a1f3-192ee4c02588-kube-api-access-wkgdd\") pod \"nova-cell1-8f1e-account-create-update-s2nlt\" (UID: \"159df315-6004-48f3-a1f3-192ee4c02588\") " pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.547680 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159df315-6004-48f3-a1f3-192ee4c02588-operator-scripts\") pod \"nova-cell1-8f1e-account-create-update-s2nlt\" (UID: \"159df315-6004-48f3-a1f3-192ee4c02588\") " pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.548376 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159df315-6004-48f3-a1f3-192ee4c02588-operator-scripts\") pod \"nova-cell1-8f1e-account-create-update-s2nlt\" (UID: \"159df315-6004-48f3-a1f3-192ee4c02588\") " pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.578275 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkgdd\" (UniqueName: \"kubernetes.io/projected/159df315-6004-48f3-a1f3-192ee4c02588-kube-api-access-wkgdd\") pod \"nova-cell1-8f1e-account-create-update-s2nlt\" (UID: \"159df315-6004-48f3-a1f3-192ee4c02588\") " pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.660008 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2d4e-account-create-update-ntthm"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.666312 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.677317 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b318c0e6-bda6-4a0b-9f83-6a636ef90c08","Type":"ContainerDied","Data":"f77153d5e603c7d8ae22b5384f13c3e40508553c5b93bdad28cb169d91166d84"}
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.677391 4998 scope.go:117] "RemoveContainer" containerID="952add9e480d2f75a6dc35ed32f4f1d2c64eff0c39f1db92eb9e22b123d65e62"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.677544 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7758b6f85-bxf6h"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.730660 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.740974 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.747624 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.751468 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.755925 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.761983 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.762293 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.769698 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.861346 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-config-data\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.861737 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.861776 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-scripts\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.861808 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smssq\" (UniqueName: \"kubernetes.io/projected/906e560b-ca5b-48cf-988b-a06dfbacc876-kube-api-access-smssq\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.861872 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-log-httpd\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.861926 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-run-httpd\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.861950 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.901259 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-w2dlm"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.969439 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-config-data\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.969527 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.969572 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-scripts\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.969613 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smssq\" (UniqueName: \"kubernetes.io/projected/906e560b-ca5b-48cf-988b-a06dfbacc876-kube-api-access-smssq\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.969683 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-log-httpd\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.969737 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-run-httpd\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.969767 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.970790 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-log-httpd\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.971135 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-run-httpd\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.972067 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5249f"]
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.980496 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.986376 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-scripts\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.986821 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-config-data\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:25 crc kubenswrapper[4998]: I0227 10:39:25.987065 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:26 crc kubenswrapper[4998]: I0227 10:39:26.004550 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8d8b-account-create-update-gkdwk"]
Feb 27 10:39:26 crc kubenswrapper[4998]: I0227 10:39:26.028527 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smssq\" (UniqueName: \"kubernetes.io/projected/906e560b-ca5b-48cf-988b-a06dfbacc876-kube-api-access-smssq\") pod \"ceilometer-0\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " pod="openstack/ceilometer-0"
Feb 27 10:39:26 crc kubenswrapper[4998]: I0227 10:39:26.094585 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:39:26 crc kubenswrapper[4998]: I0227 10:39:26.182923 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kbvf6"]
Feb 27 10:39:26 crc kubenswrapper[4998]: I0227 10:39:26.776839 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b318c0e6-bda6-4a0b-9f83-6a636ef90c08" path="/var/lib/kubelet/pods/b318c0e6-bda6-4a0b-9f83-6a636ef90c08/volumes"
Feb 27 10:39:29 crc kubenswrapper[4998]: I0227 10:39:29.900498 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64c87cb5cd-prbxl" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused"
Feb 27 10:39:31 crc kubenswrapper[4998]: I0227 10:39:31.567675 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 27 10:39:31 crc kubenswrapper[4998]: I0227 10:39:31.592245 4998 scope.go:117] "RemoveContainer" containerID="3b7b29a9a14470e97f5459925c46251ada2e7b75204c1d2ec909d6dd2782441a"
Feb 27 10:39:31 crc kubenswrapper[4998]: I0227 10:39:31.721838 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kbvf6" event={"ID":"2eea7aa4-efe1-4601-84a2-5dce0446e27e","Type":"ContainerStarted","Data":"a5148db443bf10f841a0cfa31d7570b7a1b4ce3a4f5109a9fcaa8a853811c332"}
Feb 27 10:39:31 crc kubenswrapper[4998]: I0227 10:39:31.726456 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w2dlm" event={"ID":"aa43c986-f63e-4e51-9c39-6f3d39260745","Type":"ContainerStarted","Data":"4348c9b99b2cc54d58f3c0ff6e9a2004a6cff8a364867eff1af5ca2cb8f90601"}
Feb 27 10:39:31 crc kubenswrapper[4998]: I0227 10:39:31.733363 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5249f" event={"ID":"eb09f66d-895d-4368-beac-96ae5467a97a","Type":"ContainerStarted","Data":"e330e9fa5881d10bd0ff0ff345442d4baa32ad7ae474c2a77581711699888b6e"}
Feb 27 10:39:31 crc kubenswrapper[4998]: I0227 10:39:31.748256 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8d8b-account-create-update-gkdwk" event={"ID":"bfa6944d-79e7-49cd-a3c3-4b183ac32c63","Type":"ContainerStarted","Data":"4c6cb88f8d4cd3f0cee386efa072550157a7421b85ae76c51d63f9eff9f73da9"}
Feb 27 10:39:31 crc kubenswrapper[4998]: I0227 10:39:31.789923 4998 scope.go:117] "RemoveContainer" containerID="7c2ed0082690f16a18ed325c460eaa42a7a8a3ab54d79065151b8f70f891d707"
Feb 27 10:39:31 crc kubenswrapper[4998]: I0227 10:39:31.859378 4998 scope.go:117] "RemoveContainer" containerID="b5b4115a4eba4bfd7cad294788f637c5a83d54134d82cbcafc6865077b76c827"
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.159698 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2d4e-account-create-update-ntthm"]
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.170090 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.273872 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.298814 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8f1e-account-create-update-s2nlt"]
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.472795 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.762625 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2d4e-account-create-update-ntthm" event={"ID":"dc3d5ef4-3dac-442c-b6e8-64435a2f474a","Type":"ContainerStarted","Data":"3405e474b165ed537e9aaa3f84be16a6f9af9ab98609252f615e86c7db3d2c57"}
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.762672 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2d4e-account-create-update-ntthm" event={"ID":"dc3d5ef4-3dac-442c-b6e8-64435a2f474a","Type":"ContainerStarted","Data":"ac61ffa7165c454ced5b3f344a9e24e3710fa555a126967b42e20bac27bf6e64"}
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.766028 4998 generic.go:334] "Generic (PLEG): container finished" podID="2eea7aa4-efe1-4601-84a2-5dce0446e27e" containerID="d1042d168a64a3e8094c92b3891df6353ed3fd743c70ef714ff254aacd631081" exitCode=0
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.768201 4998 generic.go:334] "Generic (PLEG): container finished" podID="aa43c986-f63e-4e51-9c39-6f3d39260745" containerID="19e9bda0bf3b7df19880c8ecb26998dd1afa76225e70f831cbfb3ee11ce597f7" exitCode=0
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.769393 4998 generic.go:334] "Generic (PLEG): container finished" podID="eb09f66d-895d-4368-beac-96ae5467a97a" containerID="d11113c411e393c59459117cef4dc1d2fabe18c236fb6d17bfdc73913454bed4" exitCode=0
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.773892 4998 generic.go:334] "Generic (PLEG): container finished" podID="bfa6944d-79e7-49cd-a3c3-4b183ac32c63" containerID="0da6500eaac45dc91f8b760e81e1d83245b2c4fb712eef06ca1085b4e34f83b1" exitCode=0
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.782567 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kbvf6" event={"ID":"2eea7aa4-efe1-4601-84a2-5dce0446e27e","Type":"ContainerDied","Data":"d1042d168a64a3e8094c92b3891df6353ed3fd743c70ef714ff254aacd631081"}
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.782633 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w2dlm" event={"ID":"aa43c986-f63e-4e51-9c39-6f3d39260745","Type":"ContainerDied","Data":"19e9bda0bf3b7df19880c8ecb26998dd1afa76225e70f831cbfb3ee11ce597f7"}
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.782651 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5249f" event={"ID":"eb09f66d-895d-4368-beac-96ae5467a97a","Type":"ContainerDied","Data":"d11113c411e393c59459117cef4dc1d2fabe18c236fb6d17bfdc73913454bed4"}
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.782667 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerStarted","Data":"a9477227404b373192de3bc0c16fb5b46e2167c450020f16d325f03304f83bca"}
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.782683 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5","Type":"ContainerStarted","Data":"0068c77f4ee699f75a8e7821878c392c90a1da08a7446f7ef962164a6948d467"}
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.782723 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8d8b-account-create-update-gkdwk" event={"ID":"bfa6944d-79e7-49cd-a3c3-4b183ac32c63","Type":"ContainerDied","Data":"0da6500eaac45dc91f8b760e81e1d83245b2c4fb712eef06ca1085b4e34f83b1"}
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.782738 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt" event={"ID":"159df315-6004-48f3-a1f3-192ee4c02588","Type":"ContainerStarted","Data":"fb4666a3aed59fa64994e8679296ded7899b8feb118b517e8904c3ad63f85c84"}
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.797819 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-2d4e-account-create-update-ntthm" podStartSLOduration=7.797795678 podStartE2EDuration="7.797795678s" podCreationTimestamp="2026-02-27 10:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:32.780979754 +0000 UTC m=+1324.779250722" watchObservedRunningTime="2026-02-27 10:39:32.797795678 +0000 UTC m=+1324.796066646"
Feb 27 10:39:32 crc kubenswrapper[4998]: I0227 10:39:32.806634 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.218399656 podStartE2EDuration="13.806613077s" podCreationTimestamp="2026-02-27 10:39:19 +0000 UTC" firstStartedPulling="2026-02-27 10:39:20.092720022 +0000 UTC m=+1312.090990990" lastFinishedPulling="2026-02-27 10:39:31.680933443 +0000 UTC m=+1323.679204411" observedRunningTime="2026-02-27 10:39:32.803845519 +0000 UTC m=+1324.802116517" watchObservedRunningTime="2026-02-27 10:39:32.806613077 +0000 UTC m=+1324.804884045"
Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.157892 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c687d6d7f-drrbg"
Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.252007 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-787dd6b8cd-n8j8x"]
Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.252252 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-787dd6b8cd-n8j8x"
podUID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerName="neutron-api" containerID="cri-o://eae121cd1e1b682313b6c28cdc2c4eae877c426cfa7a1614bda858ca83e1b95a" gracePeriod=30 Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.252334 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-787dd6b8cd-n8j8x" podUID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerName="neutron-httpd" containerID="cri-o://3a7e682515de5896cb79338cae359e4e76067f0248aad6f138cd24ec86c343bd" gracePeriod=30 Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.298200 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.302321 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7758b6f85-bxf6h" Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.786695 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt" event={"ID":"159df315-6004-48f3-a1f3-192ee4c02588","Type":"ContainerStarted","Data":"7ed0129e40a525787727447885326d046e1144735af18ff175409dfe36d92c0b"} Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.790800 4998 generic.go:334] "Generic (PLEG): container finished" podID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerID="3a7e682515de5896cb79338cae359e4e76067f0248aad6f138cd24ec86c343bd" exitCode=0 Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.791000 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787dd6b8cd-n8j8x" event={"ID":"b910b535-da07-4b60-b42c-72f170ac8bbc","Type":"ContainerDied","Data":"3a7e682515de5896cb79338cae359e4e76067f0248aad6f138cd24ec86c343bd"} Feb 27 10:39:33 crc kubenswrapper[4998]: I0227 10:39:33.799866 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt" 
podStartSLOduration=8.799851828 podStartE2EDuration="8.799851828s" podCreationTimestamp="2026-02-27 10:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:33.797940338 +0000 UTC m=+1325.796211306" watchObservedRunningTime="2026-02-27 10:39:33.799851828 +0000 UTC m=+1325.798122796" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.215061 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-w2dlm" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.352681 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf82d\" (UniqueName: \"kubernetes.io/projected/aa43c986-f63e-4e51-9c39-6f3d39260745-kube-api-access-nf82d\") pod \"aa43c986-f63e-4e51-9c39-6f3d39260745\" (UID: \"aa43c986-f63e-4e51-9c39-6f3d39260745\") " Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.352809 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa43c986-f63e-4e51-9c39-6f3d39260745-operator-scripts\") pod \"aa43c986-f63e-4e51-9c39-6f3d39260745\" (UID: \"aa43c986-f63e-4e51-9c39-6f3d39260745\") " Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.425169 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5249f" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.435465 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8d8b-account-create-update-gkdwk" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.557902 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vldh\" (UniqueName: \"kubernetes.io/projected/eb09f66d-895d-4368-beac-96ae5467a97a-kube-api-access-8vldh\") pod \"eb09f66d-895d-4368-beac-96ae5467a97a\" (UID: \"eb09f66d-895d-4368-beac-96ae5467a97a\") " Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.558164 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb09f66d-895d-4368-beac-96ae5467a97a-operator-scripts\") pod \"eb09f66d-895d-4368-beac-96ae5467a97a\" (UID: \"eb09f66d-895d-4368-beac-96ae5467a97a\") " Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.558277 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-operator-scripts\") pod \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\" (UID: \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\") " Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.558356 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl8sw\" (UniqueName: \"kubernetes.io/projected/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-kube-api-access-kl8sw\") pod \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\" (UID: \"bfa6944d-79e7-49cd-a3c3-4b183ac32c63\") " Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.559260 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb09f66d-895d-4368-beac-96ae5467a97a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb09f66d-895d-4368-beac-96ae5467a97a" (UID: "eb09f66d-895d-4368-beac-96ae5467a97a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.559326 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa43c986-f63e-4e51-9c39-6f3d39260745-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa43c986-f63e-4e51-9c39-6f3d39260745" (UID: "aa43c986-f63e-4e51-9c39-6f3d39260745"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.560090 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfa6944d-79e7-49cd-a3c3-4b183ac32c63" (UID: "bfa6944d-79e7-49cd-a3c3-4b183ac32c63"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.563791 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-kube-api-access-kl8sw" (OuterVolumeSpecName: "kube-api-access-kl8sw") pod "bfa6944d-79e7-49cd-a3c3-4b183ac32c63" (UID: "bfa6944d-79e7-49cd-a3c3-4b183ac32c63"). InnerVolumeSpecName "kube-api-access-kl8sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.564410 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa43c986-f63e-4e51-9c39-6f3d39260745-kube-api-access-nf82d" (OuterVolumeSpecName: "kube-api-access-nf82d") pod "aa43c986-f63e-4e51-9c39-6f3d39260745" (UID: "aa43c986-f63e-4e51-9c39-6f3d39260745"). InnerVolumeSpecName "kube-api-access-nf82d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.565838 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb09f66d-895d-4368-beac-96ae5467a97a-kube-api-access-8vldh" (OuterVolumeSpecName: "kube-api-access-8vldh") pod "eb09f66d-895d-4368-beac-96ae5467a97a" (UID: "eb09f66d-895d-4368-beac-96ae5467a97a"). InnerVolumeSpecName "kube-api-access-8vldh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.623805 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kbvf6" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.661634 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb09f66d-895d-4368-beac-96ae5467a97a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.661730 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.661745 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl8sw\" (UniqueName: \"kubernetes.io/projected/bfa6944d-79e7-49cd-a3c3-4b183ac32c63-kube-api-access-kl8sw\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.661768 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf82d\" (UniqueName: \"kubernetes.io/projected/aa43c986-f63e-4e51-9c39-6f3d39260745-kube-api-access-nf82d\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.661782 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vldh\" (UniqueName: 
\"kubernetes.io/projected/eb09f66d-895d-4368-beac-96ae5467a97a-kube-api-access-8vldh\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.661795 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa43c986-f63e-4e51-9c39-6f3d39260745-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.762699 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crl7k\" (UniqueName: \"kubernetes.io/projected/2eea7aa4-efe1-4601-84a2-5dce0446e27e-kube-api-access-crl7k\") pod \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\" (UID: \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\") " Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.762766 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eea7aa4-efe1-4601-84a2-5dce0446e27e-operator-scripts\") pod \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\" (UID: \"2eea7aa4-efe1-4601-84a2-5dce0446e27e\") " Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.763751 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eea7aa4-efe1-4601-84a2-5dce0446e27e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2eea7aa4-efe1-4601-84a2-5dce0446e27e" (UID: "2eea7aa4-efe1-4601-84a2-5dce0446e27e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.767450 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eea7aa4-efe1-4601-84a2-5dce0446e27e-kube-api-access-crl7k" (OuterVolumeSpecName: "kube-api-access-crl7k") pod "2eea7aa4-efe1-4601-84a2-5dce0446e27e" (UID: "2eea7aa4-efe1-4601-84a2-5dce0446e27e"). InnerVolumeSpecName "kube-api-access-crl7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.810675 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kbvf6" event={"ID":"2eea7aa4-efe1-4601-84a2-5dce0446e27e","Type":"ContainerDied","Data":"a5148db443bf10f841a0cfa31d7570b7a1b4ce3a4f5109a9fcaa8a853811c332"} Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.810722 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5148db443bf10f841a0cfa31d7570b7a1b4ce3a4f5109a9fcaa8a853811c332" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.810774 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kbvf6" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.823402 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w2dlm" event={"ID":"aa43c986-f63e-4e51-9c39-6f3d39260745","Type":"ContainerDied","Data":"4348c9b99b2cc54d58f3c0ff6e9a2004a6cff8a364867eff1af5ca2cb8f90601"} Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.823432 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-w2dlm" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.823443 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4348c9b99b2cc54d58f3c0ff6e9a2004a6cff8a364867eff1af5ca2cb8f90601" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.825634 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5249f" event={"ID":"eb09f66d-895d-4368-beac-96ae5467a97a","Type":"ContainerDied","Data":"e330e9fa5881d10bd0ff0ff345442d4baa32ad7ae474c2a77581711699888b6e"} Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.825678 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e330e9fa5881d10bd0ff0ff345442d4baa32ad7ae474c2a77581711699888b6e" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.825755 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5249f" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.828938 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8d8b-account-create-update-gkdwk" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.828956 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8d8b-account-create-update-gkdwk" event={"ID":"bfa6944d-79e7-49cd-a3c3-4b183ac32c63","Type":"ContainerDied","Data":"4c6cb88f8d4cd3f0cee386efa072550157a7421b85ae76c51d63f9eff9f73da9"} Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.828975 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c6cb88f8d4cd3f0cee386efa072550157a7421b85ae76c51d63f9eff9f73da9" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.865859 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crl7k\" (UniqueName: \"kubernetes.io/projected/2eea7aa4-efe1-4601-84a2-5dce0446e27e-kube-api-access-crl7k\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:34 crc kubenswrapper[4998]: I0227 10:39:34.865893 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eea7aa4-efe1-4601-84a2-5dce0446e27e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:35 crc kubenswrapper[4998]: I0227 10:39:35.884620 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerStarted","Data":"2f3c41f16ef4b3b7f5d8afc8eebc2fb98b4cf85a40c3cc33cedc07024cdd8baf"} Feb 27 10:39:35 crc kubenswrapper[4998]: I0227 10:39:35.887057 4998 generic.go:334] "Generic (PLEG): container finished" podID="dc3d5ef4-3dac-442c-b6e8-64435a2f474a" containerID="3405e474b165ed537e9aaa3f84be16a6f9af9ab98609252f615e86c7db3d2c57" exitCode=0 Feb 27 10:39:35 crc kubenswrapper[4998]: I0227 10:39:35.887124 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2d4e-account-create-update-ntthm" 
event={"ID":"dc3d5ef4-3dac-442c-b6e8-64435a2f474a","Type":"ContainerDied","Data":"3405e474b165ed537e9aaa3f84be16a6f9af9ab98609252f615e86c7db3d2c57"} Feb 27 10:39:35 crc kubenswrapper[4998]: I0227 10:39:35.889267 4998 generic.go:334] "Generic (PLEG): container finished" podID="159df315-6004-48f3-a1f3-192ee4c02588" containerID="7ed0129e40a525787727447885326d046e1144735af18ff175409dfe36d92c0b" exitCode=0 Feb 27 10:39:35 crc kubenswrapper[4998]: I0227 10:39:35.889297 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt" event={"ID":"159df315-6004-48f3-a1f3-192ee4c02588","Type":"ContainerDied","Data":"7ed0129e40a525787727447885326d046e1144735af18ff175409dfe36d92c0b"} Feb 27 10:39:36 crc kubenswrapper[4998]: I0227 10:39:36.900886 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerStarted","Data":"c13e1a462178f1ed51a00ad08bc3a5a98a2d4320fe7f9d1bd3b17d2877c8f479"} Feb 27 10:39:36 crc kubenswrapper[4998]: I0227 10:39:36.901401 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerStarted","Data":"b445757a5e844b2f543f0647dde7a5e977ea3ae02d2a3b056f6bd7d31b06fdff"} Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.401074 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2d4e-account-create-update-ntthm" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.411079 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.413653 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-operator-scripts\") pod \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\" (UID: \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\") " Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.413841 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f2q7\" (UniqueName: \"kubernetes.io/projected/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-kube-api-access-6f2q7\") pod \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\" (UID: \"dc3d5ef4-3dac-442c-b6e8-64435a2f474a\") " Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.416131 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc3d5ef4-3dac-442c-b6e8-64435a2f474a" (UID: "dc3d5ef4-3dac-442c-b6e8-64435a2f474a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.423018 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-kube-api-access-6f2q7" (OuterVolumeSpecName: "kube-api-access-6f2q7") pod "dc3d5ef4-3dac-442c-b6e8-64435a2f474a" (UID: "dc3d5ef4-3dac-442c-b6e8-64435a2f474a"). InnerVolumeSpecName "kube-api-access-6f2q7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.516309 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkgdd\" (UniqueName: \"kubernetes.io/projected/159df315-6004-48f3-a1f3-192ee4c02588-kube-api-access-wkgdd\") pod \"159df315-6004-48f3-a1f3-192ee4c02588\" (UID: \"159df315-6004-48f3-a1f3-192ee4c02588\") " Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.516419 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159df315-6004-48f3-a1f3-192ee4c02588-operator-scripts\") pod \"159df315-6004-48f3-a1f3-192ee4c02588\" (UID: \"159df315-6004-48f3-a1f3-192ee4c02588\") " Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.517117 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.517132 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f2q7\" (UniqueName: \"kubernetes.io/projected/dc3d5ef4-3dac-442c-b6e8-64435a2f474a-kube-api-access-6f2q7\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.518158 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/159df315-6004-48f3-a1f3-192ee4c02588-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "159df315-6004-48f3-a1f3-192ee4c02588" (UID: "159df315-6004-48f3-a1f3-192ee4c02588"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.524754 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159df315-6004-48f3-a1f3-192ee4c02588-kube-api-access-wkgdd" (OuterVolumeSpecName: "kube-api-access-wkgdd") pod "159df315-6004-48f3-a1f3-192ee4c02588" (UID: "159df315-6004-48f3-a1f3-192ee4c02588"). InnerVolumeSpecName "kube-api-access-wkgdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.619793 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkgdd\" (UniqueName: \"kubernetes.io/projected/159df315-6004-48f3-a1f3-192ee4c02588-kube-api-access-wkgdd\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.619845 4998 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159df315-6004-48f3-a1f3-192ee4c02588-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.910870 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt" event={"ID":"159df315-6004-48f3-a1f3-192ee4c02588","Type":"ContainerDied","Data":"fb4666a3aed59fa64994e8679296ded7899b8feb118b517e8904c3ad63f85c84"} Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.910911 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb4666a3aed59fa64994e8679296ded7899b8feb118b517e8904c3ad63f85c84" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.910924 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8f1e-account-create-update-s2nlt" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.913484 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2d4e-account-create-update-ntthm" event={"ID":"dc3d5ef4-3dac-442c-b6e8-64435a2f474a","Type":"ContainerDied","Data":"ac61ffa7165c454ced5b3f344a9e24e3710fa555a126967b42e20bac27bf6e64"} Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.913531 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac61ffa7165c454ced5b3f344a9e24e3710fa555a126967b42e20bac27bf6e64" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.913593 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2d4e-account-create-update-ntthm" Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.915531 4998 generic.go:334] "Generic (PLEG): container finished" podID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerID="eae121cd1e1b682313b6c28cdc2c4eae877c426cfa7a1614bda858ca83e1b95a" exitCode=0 Feb 27 10:39:37 crc kubenswrapper[4998]: I0227 10:39:37.915569 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787dd6b8cd-n8j8x" event={"ID":"b910b535-da07-4b60-b42c-72f170ac8bbc","Type":"ContainerDied","Data":"eae121cd1e1b682313b6c28cdc2c4eae877c426cfa7a1614bda858ca83e1b95a"} Feb 27 10:39:38 crc kubenswrapper[4998]: I0227 10:39:38.866353 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:39:39 crc kubenswrapper[4998]: I0227 10:39:39.900731 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64c87cb5cd-prbxl" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Feb 27 10:39:39 crc kubenswrapper[4998]: I0227 
10:39:39.900865 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64c87cb5cd-prbxl"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.492884 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-787dd6b8cd-n8j8x"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.568374 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-combined-ca-bundle\") pod \"b910b535-da07-4b60-b42c-72f170ac8bbc\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") "
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.568486 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-httpd-config\") pod \"b910b535-da07-4b60-b42c-72f170ac8bbc\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") "
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.568518 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpwqv\" (UniqueName: \"kubernetes.io/projected/b910b535-da07-4b60-b42c-72f170ac8bbc-kube-api-access-jpwqv\") pod \"b910b535-da07-4b60-b42c-72f170ac8bbc\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") "
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.568629 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-config\") pod \"b910b535-da07-4b60-b42c-72f170ac8bbc\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") "
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.568723 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-ovndb-tls-certs\") pod \"b910b535-da07-4b60-b42c-72f170ac8bbc\" (UID: \"b910b535-da07-4b60-b42c-72f170ac8bbc\") "
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.592654 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b910b535-da07-4b60-b42c-72f170ac8bbc-kube-api-access-jpwqv" (OuterVolumeSpecName: "kube-api-access-jpwqv") pod "b910b535-da07-4b60-b42c-72f170ac8bbc" (UID: "b910b535-da07-4b60-b42c-72f170ac8bbc"). InnerVolumeSpecName "kube-api-access-jpwqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.599424 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b910b535-da07-4b60-b42c-72f170ac8bbc" (UID: "b910b535-da07-4b60-b42c-72f170ac8bbc"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.654071 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p7ghm"]
Feb 27 10:39:40 crc kubenswrapper[4998]: E0227 10:39:40.654774 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa6944d-79e7-49cd-a3c3-4b183ac32c63" containerName="mariadb-account-create-update"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.654794 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa6944d-79e7-49cd-a3c3-4b183ac32c63" containerName="mariadb-account-create-update"
Feb 27 10:39:40 crc kubenswrapper[4998]: E0227 10:39:40.654824 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerName="neutron-httpd"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.654831 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerName="neutron-httpd"
Feb 27 10:39:40 crc kubenswrapper[4998]: E0227 10:39:40.654839 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb09f66d-895d-4368-beac-96ae5467a97a" containerName="mariadb-database-create"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.654848 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb09f66d-895d-4368-beac-96ae5467a97a" containerName="mariadb-database-create"
Feb 27 10:39:40 crc kubenswrapper[4998]: E0227 10:39:40.654857 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159df315-6004-48f3-a1f3-192ee4c02588" containerName="mariadb-account-create-update"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.654864 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="159df315-6004-48f3-a1f3-192ee4c02588" containerName="mariadb-account-create-update"
Feb 27 10:39:40 crc kubenswrapper[4998]: E0227 10:39:40.654883 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerName="neutron-api"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.654890 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerName="neutron-api"
Feb 27 10:39:40 crc kubenswrapper[4998]: E0227 10:39:40.654904 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eea7aa4-efe1-4601-84a2-5dce0446e27e" containerName="mariadb-database-create"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.654910 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eea7aa4-efe1-4601-84a2-5dce0446e27e" containerName="mariadb-database-create"
Feb 27 10:39:40 crc kubenswrapper[4998]: E0227 10:39:40.654936 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa43c986-f63e-4e51-9c39-6f3d39260745" containerName="mariadb-database-create"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.654943 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa43c986-f63e-4e51-9c39-6f3d39260745" containerName="mariadb-database-create"
Feb 27 10:39:40 crc kubenswrapper[4998]: E0227 10:39:40.654956 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3d5ef4-3dac-442c-b6e8-64435a2f474a" containerName="mariadb-account-create-update"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.654963 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3d5ef4-3dac-442c-b6e8-64435a2f474a" containerName="mariadb-account-create-update"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.655193 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerName="neutron-httpd"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.655209 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3d5ef4-3dac-442c-b6e8-64435a2f474a" containerName="mariadb-account-create-update"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.655249 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa43c986-f63e-4e51-9c39-6f3d39260745" containerName="mariadb-database-create"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.655264 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b910b535-da07-4b60-b42c-72f170ac8bbc" containerName="neutron-api"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.655279 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb09f66d-895d-4368-beac-96ae5467a97a" containerName="mariadb-database-create"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.655296 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa6944d-79e7-49cd-a3c3-4b183ac32c63" containerName="mariadb-account-create-update"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.655312 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="159df315-6004-48f3-a1f3-192ee4c02588" containerName="mariadb-account-create-update"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.655320 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eea7aa4-efe1-4601-84a2-5dce0446e27e" containerName="mariadb-database-create"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.656024 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.658449 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.658500 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.660259 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w6sh7"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.664086 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p7ghm"]
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.689502 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-scripts\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.689592 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-config-data\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.689618 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.689639 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b910b535-da07-4b60-b42c-72f170ac8bbc" (UID: "b910b535-da07-4b60-b42c-72f170ac8bbc"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.689659 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrvfh\" (UniqueName: \"kubernetes.io/projected/a9abdc95-5c73-40b8-a234-8b13e7be1cec-kube-api-access-zrvfh\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.689813 4998 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.689830 4998 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.689842 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpwqv\" (UniqueName: \"kubernetes.io/projected/b910b535-da07-4b60-b42c-72f170ac8bbc-kube-api-access-jpwqv\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.690497 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-config" (OuterVolumeSpecName: "config") pod "b910b535-da07-4b60-b42c-72f170ac8bbc" (UID: "b910b535-da07-4b60-b42c-72f170ac8bbc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.693753 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b910b535-da07-4b60-b42c-72f170ac8bbc" (UID: "b910b535-da07-4b60-b42c-72f170ac8bbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.791047 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-scripts\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.791149 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-config-data\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.791176 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.791219 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrvfh\" (UniqueName: \"kubernetes.io/projected/a9abdc95-5c73-40b8-a234-8b13e7be1cec-kube-api-access-zrvfh\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.791339 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.791353 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b910b535-da07-4b60-b42c-72f170ac8bbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.795493 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-scripts\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.795950 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.796764 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-config-data\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.809755 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrvfh\" (UniqueName: \"kubernetes.io/projected/a9abdc95-5c73-40b8-a234-8b13e7be1cec-kube-api-access-zrvfh\") pod \"nova-cell0-conductor-db-sync-p7ghm\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") " pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.945424 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787dd6b8cd-n8j8x" event={"ID":"b910b535-da07-4b60-b42c-72f170ac8bbc","Type":"ContainerDied","Data":"c96b7294f4e72df341d03c04ec20641c0e0c0d9460ad674a8210907476bcca16"}
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.945477 4998 scope.go:117] "RemoveContainer" containerID="3a7e682515de5896cb79338cae359e4e76067f0248aad6f138cd24ec86c343bd"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.945556 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-787dd6b8cd-n8j8x"
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.969098 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-787dd6b8cd-n8j8x"]
Feb 27 10:39:40 crc kubenswrapper[4998]: I0227 10:39:40.982014 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-787dd6b8cd-n8j8x"]
Feb 27 10:39:41 crc kubenswrapper[4998]: I0227 10:39:41.070456 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:39:41 crc kubenswrapper[4998]: I0227 10:39:41.484491 4998 scope.go:117] "RemoveContainer" containerID="eae121cd1e1b682313b6c28cdc2c4eae877c426cfa7a1614bda858ca83e1b95a"
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.489491 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p7ghm"]
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.512791 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7455798654-94zkv"
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.530739 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7455798654-94zkv"
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.594014 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85b475b45b-ggjbp"]
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.594270 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85b475b45b-ggjbp" podUID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerName="placement-log" containerID="cri-o://8da3b426469ff337c194003544e89203c78dc69859f657de6c5989d427a40e87" gracePeriod=30
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.594400 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85b475b45b-ggjbp" podUID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerName="placement-api" containerID="cri-o://cbaf5c02f514eba186509dbedb6411360e62b35dd42d0510e6c64f49914d996d" gracePeriod=30
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.776030 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b910b535-da07-4b60-b42c-72f170ac8bbc" path="/var/lib/kubelet/pods/b910b535-da07-4b60-b42c-72f170ac8bbc/volumes"
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.966435 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerStarted","Data":"1036b183bb9630a08c6990ada6a2ecee15efb90cf972c3d0cb5a72e7d2cf14b0"}
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.966566 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="ceilometer-central-agent" containerID="cri-o://2f3c41f16ef4b3b7f5d8afc8eebc2fb98b4cf85a40c3cc33cedc07024cdd8baf" gracePeriod=30
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.966662 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="proxy-httpd" containerID="cri-o://1036b183bb9630a08c6990ada6a2ecee15efb90cf972c3d0cb5a72e7d2cf14b0" gracePeriod=30
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.966658 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.966711 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="sg-core" containerID="cri-o://c13e1a462178f1ed51a00ad08bc3a5a98a2d4320fe7f9d1bd3b17d2877c8f479" gracePeriod=30
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.966747 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="ceilometer-notification-agent" containerID="cri-o://b445757a5e844b2f543f0647dde7a5e977ea3ae02d2a3b056f6bd7d31b06fdff" gracePeriod=30
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.977599 4998 generic.go:334] "Generic (PLEG): container finished" podID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerID="8da3b426469ff337c194003544e89203c78dc69859f657de6c5989d427a40e87" exitCode=143
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.977649 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85b475b45b-ggjbp" event={"ID":"64d37fff-983b-4a39-89c4-dba36db2f1ba","Type":"ContainerDied","Data":"8da3b426469ff337c194003544e89203c78dc69859f657de6c5989d427a40e87"}
Feb 27 10:39:42 crc kubenswrapper[4998]: I0227 10:39:42.981350 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-p7ghm" event={"ID":"a9abdc95-5c73-40b8-a234-8b13e7be1cec","Type":"ContainerStarted","Data":"76bb8d846aaf76b9b0b5676f4598eaa0b7118dded18bc94cf036ff7ada42dcb7"}
Feb 27 10:39:43 crc kubenswrapper[4998]: I0227 10:39:43.003675 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.413652973 podStartE2EDuration="18.003653498s" podCreationTimestamp="2026-02-27 10:39:25 +0000 UTC" firstStartedPulling="2026-02-27 10:39:32.38382371 +0000 UTC m=+1324.382094678" lastFinishedPulling="2026-02-27 10:39:41.973824075 +0000 UTC m=+1333.972095203" observedRunningTime="2026-02-27 10:39:42.998566726 +0000 UTC m=+1334.996837714" watchObservedRunningTime="2026-02-27 10:39:43.003653498 +0000 UTC m=+1335.001924466"
Feb 27 10:39:43 crc kubenswrapper[4998]: I0227 10:39:43.993326 4998 generic.go:334] "Generic (PLEG): container finished" podID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerID="1036b183bb9630a08c6990ada6a2ecee15efb90cf972c3d0cb5a72e7d2cf14b0" exitCode=0
Feb 27 10:39:43 crc kubenswrapper[4998]: I0227 10:39:43.993669 4998 generic.go:334] "Generic (PLEG): container finished" podID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerID="c13e1a462178f1ed51a00ad08bc3a5a98a2d4320fe7f9d1bd3b17d2877c8f479" exitCode=2
Feb 27 10:39:43 crc kubenswrapper[4998]: I0227 10:39:43.993401 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerDied","Data":"1036b183bb9630a08c6990ada6a2ecee15efb90cf972c3d0cb5a72e7d2cf14b0"}
Feb 27 10:39:43 crc kubenswrapper[4998]: I0227 10:39:43.993712 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerDied","Data":"c13e1a462178f1ed51a00ad08bc3a5a98a2d4320fe7f9d1bd3b17d2877c8f479"}
Feb 27 10:39:43 crc kubenswrapper[4998]: I0227 10:39:43.993724 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerDied","Data":"b445757a5e844b2f543f0647dde7a5e977ea3ae02d2a3b056f6bd7d31b06fdff"}
Feb 27 10:39:43 crc kubenswrapper[4998]: I0227 10:39:43.993682 4998 generic.go:334] "Generic (PLEG): container finished" podID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerID="b445757a5e844b2f543f0647dde7a5e977ea3ae02d2a3b056f6bd7d31b06fdff" exitCode=0
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.804602 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc3d5ef4_3dac_442c_b6e8_64435a2f474a.slice/crio-3405e474b165ed537e9aaa3f84be16a6f9af9ab98609252f615e86c7db3d2c57.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc3d5ef4_3dac_442c_b6e8_64435a2f474a.slice/crio-3405e474b165ed537e9aaa3f84be16a6f9af9ab98609252f615e86c7db3d2c57.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.804936 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod159df315_6004_48f3_a1f3_192ee4c02588.slice/crio-fb4666a3aed59fa64994e8679296ded7899b8feb118b517e8904c3ad63f85c84": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod159df315_6004_48f3_a1f3_192ee4c02588.slice/crio-fb4666a3aed59fa64994e8679296ded7899b8feb118b517e8904c3ad63f85c84: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.804961 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod159df315_6004_48f3_a1f3_192ee4c02588.slice/crio-conmon-7ed0129e40a525787727447885326d046e1144735af18ff175409dfe36d92c0b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod159df315_6004_48f3_a1f3_192ee4c02588.slice/crio-conmon-7ed0129e40a525787727447885326d046e1144735af18ff175409dfe36d92c0b.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.804978 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod159df315_6004_48f3_a1f3_192ee4c02588.slice/crio-7ed0129e40a525787727447885326d046e1144735af18ff175409dfe36d92c0b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod159df315_6004_48f3_a1f3_192ee4c02588.slice/crio-7ed0129e40a525787727447885326d046e1144735af18ff175409dfe36d92c0b.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.817048 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-conmon-2f3c41f16ef4b3b7f5d8afc8eebc2fb98b4cf85a40c3cc33cedc07024cdd8baf.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-conmon-2f3c41f16ef4b3b7f5d8afc8eebc2fb98b4cf85a40c3cc33cedc07024cdd8baf.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.817567 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-2f3c41f16ef4b3b7f5d8afc8eebc2fb98b4cf85a40c3cc33cedc07024cdd8baf.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-2f3c41f16ef4b3b7f5d8afc8eebc2fb98b4cf85a40c3cc33cedc07024cdd8baf.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.817613 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-conmon-b445757a5e844b2f543f0647dde7a5e977ea3ae02d2a3b056f6bd7d31b06fdff.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-conmon-b445757a5e844b2f543f0647dde7a5e977ea3ae02d2a3b056f6bd7d31b06fdff.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.817634 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-b445757a5e844b2f543f0647dde7a5e977ea3ae02d2a3b056f6bd7d31b06fdff.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-b445757a5e844b2f543f0647dde7a5e977ea3ae02d2a3b056f6bd7d31b06fdff.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.817655 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-conmon-c13e1a462178f1ed51a00ad08bc3a5a98a2d4320fe7f9d1bd3b17d2877c8f479.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-conmon-c13e1a462178f1ed51a00ad08bc3a5a98a2d4320fe7f9d1bd3b17d2877c8f479.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.817674 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-c13e1a462178f1ed51a00ad08bc3a5a98a2d4320fe7f9d1bd3b17d2877c8f479.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-c13e1a462178f1ed51a00ad08bc3a5a98a2d4320fe7f9d1bd3b17d2877c8f479.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.822041 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-conmon-1036b183bb9630a08c6990ada6a2ecee15efb90cf972c3d0cb5a72e7d2cf14b0.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-conmon-1036b183bb9630a08c6990ada6a2ecee15efb90cf972c3d0cb5a72e7d2cf14b0.scope: no such file or directory
Feb 27 10:39:44 crc kubenswrapper[4998]: W0227 10:39:44.822090 4998 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-1036b183bb9630a08c6990ada6a2ecee15efb90cf972c3d0cb5a72e7d2cf14b0.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e560b_ca5b_48cf_988b_a06dfbacc876.slice/crio-1036b183bb9630a08c6990ada6a2ecee15efb90cf972c3d0cb5a72e7d2cf14b0.scope: no such file or directory
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.012821 4998 generic.go:334] "Generic (PLEG): container finished" podID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerID="2f3c41f16ef4b3b7f5d8afc8eebc2fb98b4cf85a40c3cc33cedc07024cdd8baf" exitCode=0
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.012882 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerDied","Data":"2f3c41f16ef4b3b7f5d8afc8eebc2fb98b4cf85a40c3cc33cedc07024cdd8baf"}
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.016732 4998 generic.go:334] "Generic (PLEG): container finished" podID="63a87b91-16fb-436d-8c53-317b204acebc" containerID="0d4b5db9912175f7b371aef129e1a3835005cfc41d31b381885990ea6c8dca9e" exitCode=137
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.016763 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64c87cb5cd-prbxl" event={"ID":"63a87b91-16fb-436d-8c53-317b204acebc","Type":"ContainerDied","Data":"0d4b5db9912175f7b371aef129e1a3835005cfc41d31b381885990ea6c8dca9e"}
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.093919 4998 scope.go:117] "RemoveContainer" containerID="b52207ad6eaed465ed377eae77190d22ef0fde2fcfe67e02081797f3e275bd41"
Feb 27 10:39:45 crc kubenswrapper[4998]: E0227 10:39:45.150289 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa43c986_f63e_4e51_9c39_6f3d39260745.slice/crio-conmon-19e9bda0bf3b7df19880c8ecb26998dd1afa76225e70f831cbfb3ee11ce597f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa6944d_79e7_49cd_a3c3_4b183ac32c63.slice/crio-0da6500eaac45dc91f8b760e81e1d83245b2c4fb712eef06ca1085b4e34f83b1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa6944d_79e7_49cd_a3c3_4b183ac32c63.slice/crio-4c6cb88f8d4cd3f0cee386efa072550157a7421b85ae76c51d63f9eff9f73da9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa43c986_f63e_4e51_9c39_6f3d39260745.slice/crio-4348c9b99b2cc54d58f3c0ff6e9a2004a6cff8a364867eff1af5ca2cb8f90601\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb910b535_da07_4b60_b42c_72f170ac8bbc.slice/crio-c96b7294f4e72df341d03c04ec20641c0e0c0d9460ad674a8210907476bcca16\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb910b535_da07_4b60_b42c_72f170ac8bbc.slice/crio-conmon-eae121cd1e1b682313b6c28cdc2c4eae877c426cfa7a1614bda858ca83e1b95a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a87b91_16fb_436d_8c53_317b204acebc.slice/crio-0d4b5db9912175f7b371aef129e1a3835005cfc41d31b381885990ea6c8dca9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb09f66d_895d_4368_beac_96ae5467a97a.slice/crio-e330e9fa5881d10bd0ff0ff345442d4baa32ad7ae474c2a77581711699888b6e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb910b535_da07_4b60_b42c_72f170ac8bbc.slice/crio-eae121cd1e1b682313b6c28cdc2c4eae877c426cfa7a1614bda858ca83e1b95a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64d37fff_983b_4a39_89c4_dba36db2f1ba.slice/crio-conmon-8da3b426469ff337c194003544e89203c78dc69859f657de6c5989d427a40e87.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa6944d_79e7_49cd_a3c3_4b183ac32c63.slice/crio-conmon-0da6500eaac45dc91f8b760e81e1d83245b2c4fb712eef06ca1085b4e34f83b1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64d37fff_983b_4a39_89c4_dba36db2f1ba.slice/crio-8da3b426469ff337c194003544e89203c78dc69859f657de6c5989d427a40e87.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb09f66d_895d_4368_beac_96ae5467a97a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb910b535_da07_4b60_b42c_72f170ac8bbc.slice/crio-3a7e682515de5896cb79338cae359e4e76067f0248aad6f138cd24ec86c343bd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb09f66d_895d_4368_beac_96ae5467a97a.slice/crio-d11113c411e393c59459117cef4dc1d2fabe18c236fb6d17bfdc73913454bed4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a87b91_16fb_436d_8c53_317b204acebc.slice/crio-conmon-0d4b5db9912175f7b371aef129e1a3835005cfc41d31b381885990ea6c8dca9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb09f66d_895d_4368_beac_96ae5467a97a.slice/crio-conmon-d11113c411e393c59459117cef4dc1d2fabe18c236fb6d17bfdc73913454bed4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa43c986_f63e_4e51_9c39_6f3d39260745.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb910b535_da07_4b60_b42c_72f170ac8bbc.slice/crio-conmon-3a7e682515de5896cb79338cae359e4e76067f0248aad6f138cd24ec86c343bd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc3d5ef4_3dac_442c_b6e8_64435a2f474a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa6944d_79e7_49cd_a3c3_4b183ac32c63.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb910b535_da07_4b60_b42c_72f170ac8bbc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eea7aa4_efe1_4601_84a2_5dce0446e27e.slice/crio-a5148db443bf10f841a0cfa31d7570b7a1b4ce3a4f5109a9fcaa8a853811c332\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eea7aa4_efe1_4601_84a2_5dce0446e27e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod159df315_6004_48f3_a1f3_192ee4c02588.slice\": RecentStats: unable to find data in memory cache]"
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.194297 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.194864 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64c87cb5cd-prbxl"
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.195802 4998 scope.go:117] "RemoveContainer" containerID="cc13a2b6c8fbb3c6bcdd930f569272638758940627dab215dff1966642b3ce69"
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.250846 4998 scope.go:117] "RemoveContainer" containerID="8a12151f48ca3a2183b68ccdcf7a2ffb1c0ae380d867aaadebdd44e175cfa585"
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292434 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smssq\" (UniqueName: \"kubernetes.io/projected/906e560b-ca5b-48cf-988b-a06dfbacc876-kube-api-access-smssq\") pod \"906e560b-ca5b-48cf-988b-a06dfbacc876\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") "
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292490 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-log-httpd\") pod \"906e560b-ca5b-48cf-988b-a06dfbacc876\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") "
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292569 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-config-data\") pod \"906e560b-ca5b-48cf-988b-a06dfbacc876\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") "
Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292597 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-combined-ca-bundle\") pod \"906e560b-ca5b-48cf-988b-a06dfbacc876\" (UID:
\"906e560b-ca5b-48cf-988b-a06dfbacc876\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292651 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-secret-key\") pod \"63a87b91-16fb-436d-8c53-317b204acebc\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292704 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-scripts\") pod \"906e560b-ca5b-48cf-988b-a06dfbacc876\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292761 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-combined-ca-bundle\") pod \"63a87b91-16fb-436d-8c53-317b204acebc\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292783 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjkf4\" (UniqueName: \"kubernetes.io/projected/63a87b91-16fb-436d-8c53-317b204acebc-kube-api-access-vjkf4\") pod \"63a87b91-16fb-436d-8c53-317b204acebc\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292818 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-sg-core-conf-yaml\") pod \"906e560b-ca5b-48cf-988b-a06dfbacc876\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292840 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/63a87b91-16fb-436d-8c53-317b204acebc-logs\") pod \"63a87b91-16fb-436d-8c53-317b204acebc\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292867 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-scripts\") pod \"63a87b91-16fb-436d-8c53-317b204acebc\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292890 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-config-data\") pod \"63a87b91-16fb-436d-8c53-317b204acebc\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292953 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-run-httpd\") pod \"906e560b-ca5b-48cf-988b-a06dfbacc876\" (UID: \"906e560b-ca5b-48cf-988b-a06dfbacc876\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.292978 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-tls-certs\") pod \"63a87b91-16fb-436d-8c53-317b204acebc\" (UID: \"63a87b91-16fb-436d-8c53-317b204acebc\") " Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.294418 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a87b91-16fb-436d-8c53-317b204acebc-logs" (OuterVolumeSpecName: "logs") pod "63a87b91-16fb-436d-8c53-317b204acebc" (UID: "63a87b91-16fb-436d-8c53-317b204acebc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.295694 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "906e560b-ca5b-48cf-988b-a06dfbacc876" (UID: "906e560b-ca5b-48cf-988b-a06dfbacc876"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.296683 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "906e560b-ca5b-48cf-988b-a06dfbacc876" (UID: "906e560b-ca5b-48cf-988b-a06dfbacc876"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.298477 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a87b91-16fb-436d-8c53-317b204acebc-kube-api-access-vjkf4" (OuterVolumeSpecName: "kube-api-access-vjkf4") pod "63a87b91-16fb-436d-8c53-317b204acebc" (UID: "63a87b91-16fb-436d-8c53-317b204acebc"). InnerVolumeSpecName "kube-api-access-vjkf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.298755 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-scripts" (OuterVolumeSpecName: "scripts") pod "906e560b-ca5b-48cf-988b-a06dfbacc876" (UID: "906e560b-ca5b-48cf-988b-a06dfbacc876"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.299746 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906e560b-ca5b-48cf-988b-a06dfbacc876-kube-api-access-smssq" (OuterVolumeSpecName: "kube-api-access-smssq") pod "906e560b-ca5b-48cf-988b-a06dfbacc876" (UID: "906e560b-ca5b-48cf-988b-a06dfbacc876"). InnerVolumeSpecName "kube-api-access-smssq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.300928 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "63a87b91-16fb-436d-8c53-317b204acebc" (UID: "63a87b91-16fb-436d-8c53-317b204acebc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.317918 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-config-data" (OuterVolumeSpecName: "config-data") pod "63a87b91-16fb-436d-8c53-317b204acebc" (UID: "63a87b91-16fb-436d-8c53-317b204acebc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.323188 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63a87b91-16fb-436d-8c53-317b204acebc" (UID: "63a87b91-16fb-436d-8c53-317b204acebc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.346279 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-scripts" (OuterVolumeSpecName: "scripts") pod "63a87b91-16fb-436d-8c53-317b204acebc" (UID: "63a87b91-16fb-436d-8c53-317b204acebc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.357128 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "906e560b-ca5b-48cf-988b-a06dfbacc876" (UID: "906e560b-ca5b-48cf-988b-a06dfbacc876"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.386216 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "63a87b91-16fb-436d-8c53-317b204acebc" (UID: "63a87b91-16fb-436d-8c53-317b204acebc"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395689 4998 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395721 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395730 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395739 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjkf4\" (UniqueName: \"kubernetes.io/projected/63a87b91-16fb-436d-8c53-317b204acebc-kube-api-access-vjkf4\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395749 4998 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395757 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a87b91-16fb-436d-8c53-317b204acebc-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395766 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395774 4998 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63a87b91-16fb-436d-8c53-317b204acebc-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395782 4998 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395789 4998 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a87b91-16fb-436d-8c53-317b204acebc-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395797 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smssq\" (UniqueName: \"kubernetes.io/projected/906e560b-ca5b-48cf-988b-a06dfbacc876-kube-api-access-smssq\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.395806 4998 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/906e560b-ca5b-48cf-988b-a06dfbacc876-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.417506 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "906e560b-ca5b-48cf-988b-a06dfbacc876" (UID: "906e560b-ca5b-48cf-988b-a06dfbacc876"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.418752 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-config-data" (OuterVolumeSpecName: "config-data") pod "906e560b-ca5b-48cf-988b-a06dfbacc876" (UID: "906e560b-ca5b-48cf-988b-a06dfbacc876"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.497036 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:45 crc kubenswrapper[4998]: I0227 10:39:45.497072 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906e560b-ca5b-48cf-988b-a06dfbacc876-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.031064 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64c87cb5cd-prbxl" event={"ID":"63a87b91-16fb-436d-8c53-317b204acebc","Type":"ContainerDied","Data":"e1a4ba2a3ba7af1679d743503219ceeb8d5a71a9e424c94828a1604bda046705"} Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.031474 4998 scope.go:117] "RemoveContainer" containerID="4d13db667b5d6248ab35065cb2dad51e3d395498b2e94ba8f06c2fdb4455a5a9" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.031608 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64c87cb5cd-prbxl" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.046564 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"906e560b-ca5b-48cf-988b-a06dfbacc876","Type":"ContainerDied","Data":"a9477227404b373192de3bc0c16fb5b46e2167c450020f16d325f03304f83bca"} Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.046772 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.061212 4998 generic.go:334] "Generic (PLEG): container finished" podID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerID="cbaf5c02f514eba186509dbedb6411360e62b35dd42d0510e6c64f49914d996d" exitCode=0 Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.061305 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85b475b45b-ggjbp" event={"ID":"64d37fff-983b-4a39-89c4-dba36db2f1ba","Type":"ContainerDied","Data":"cbaf5c02f514eba186509dbedb6411360e62b35dd42d0510e6c64f49914d996d"} Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.100984 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64c87cb5cd-prbxl"] Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.110544 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-64c87cb5cd-prbxl"] Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.132078 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.139849 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148150 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:39:46 crc kubenswrapper[4998]: E0227 10:39:46.148585 4998 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon-log" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148603 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon-log" Feb 27 10:39:46 crc kubenswrapper[4998]: E0227 10:39:46.148621 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="proxy-httpd" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148627 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="proxy-httpd" Feb 27 10:39:46 crc kubenswrapper[4998]: E0227 10:39:46.148639 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="ceilometer-notification-agent" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148645 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="ceilometer-notification-agent" Feb 27 10:39:46 crc kubenswrapper[4998]: E0227 10:39:46.148653 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="sg-core" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148659 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="sg-core" Feb 27 10:39:46 crc kubenswrapper[4998]: E0227 10:39:46.148677 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148683 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon" Feb 27 10:39:46 crc kubenswrapper[4998]: E0227 10:39:46.148699 4998 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="ceilometer-central-agent" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148705 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="ceilometer-central-agent" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148855 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148868 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="sg-core" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148878 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="proxy-httpd" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148886 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="ceilometer-central-agent" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148900 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" containerName="ceilometer-notification-agent" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.148908 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a87b91-16fb-436d-8c53-317b204acebc" containerName="horizon-log" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.150434 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.152776 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.153047 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.160797 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.218077 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.218171 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.218214 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7stj\" (UniqueName: \"kubernetes.io/projected/69b8224e-b49d-48aa-a714-9aad5b59274e-kube-api-access-d7stj\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.218268 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-scripts\") pod \"ceilometer-0\" (UID: 
\"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.218323 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-run-httpd\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.218359 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-log-httpd\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.218383 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-config-data\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.253238 4998 scope.go:117] "RemoveContainer" containerID="0d4b5db9912175f7b371aef129e1a3835005cfc41d31b381885990ea6c8dca9e" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.320484 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-run-httpd\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.320539 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-log-httpd\") pod \"ceilometer-0\" (UID: 
\"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.320561 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-config-data\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.320639 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.320687 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.320713 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7stj\" (UniqueName: \"kubernetes.io/projected/69b8224e-b49d-48aa-a714-9aad5b59274e-kube-api-access-d7stj\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.320731 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-scripts\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.321293 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-run-httpd\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.321354 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-log-httpd\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.325947 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.326084 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-scripts\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.326330 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-config-data\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.326440 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.336824 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7stj\" (UniqueName: \"kubernetes.io/projected/69b8224e-b49d-48aa-a714-9aad5b59274e-kube-api-access-d7stj\") pod \"ceilometer-0\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.407442 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.419094 4998 scope.go:117] "RemoveContainer" containerID="1036b183bb9630a08c6990ada6a2ecee15efb90cf972c3d0cb5a72e7d2cf14b0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.446478 4998 scope.go:117] "RemoveContainer" containerID="c13e1a462178f1ed51a00ad08bc3a5a98a2d4320fe7f9d1bd3b17d2877c8f479" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.474875 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.487668 4998 scope.go:117] "RemoveContainer" containerID="b445757a5e844b2f543f0647dde7a5e977ea3ae02d2a3b056f6bd7d31b06fdff" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.516171 4998 scope.go:117] "RemoveContainer" containerID="2f3c41f16ef4b3b7f5d8afc8eebc2fb98b4cf85a40c3cc33cedc07024cdd8baf" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.524873 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-internal-tls-certs\") pod \"64d37fff-983b-4a39-89c4-dba36db2f1ba\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.525024 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-combined-ca-bundle\") pod 
\"64d37fff-983b-4a39-89c4-dba36db2f1ba\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.525080 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-scripts\") pod \"64d37fff-983b-4a39-89c4-dba36db2f1ba\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.525099 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-config-data\") pod \"64d37fff-983b-4a39-89c4-dba36db2f1ba\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.525853 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4phj\" (UniqueName: \"kubernetes.io/projected/64d37fff-983b-4a39-89c4-dba36db2f1ba-kube-api-access-p4phj\") pod \"64d37fff-983b-4a39-89c4-dba36db2f1ba\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.525924 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-public-tls-certs\") pod \"64d37fff-983b-4a39-89c4-dba36db2f1ba\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.526019 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d37fff-983b-4a39-89c4-dba36db2f1ba-logs\") pod \"64d37fff-983b-4a39-89c4-dba36db2f1ba\" (UID: \"64d37fff-983b-4a39-89c4-dba36db2f1ba\") " Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.526893 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/64d37fff-983b-4a39-89c4-dba36db2f1ba-logs" (OuterVolumeSpecName: "logs") pod "64d37fff-983b-4a39-89c4-dba36db2f1ba" (UID: "64d37fff-983b-4a39-89c4-dba36db2f1ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.529614 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-scripts" (OuterVolumeSpecName: "scripts") pod "64d37fff-983b-4a39-89c4-dba36db2f1ba" (UID: "64d37fff-983b-4a39-89c4-dba36db2f1ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.531176 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d37fff-983b-4a39-89c4-dba36db2f1ba-kube-api-access-p4phj" (OuterVolumeSpecName: "kube-api-access-p4phj") pod "64d37fff-983b-4a39-89c4-dba36db2f1ba" (UID: "64d37fff-983b-4a39-89c4-dba36db2f1ba"). InnerVolumeSpecName "kube-api-access-p4phj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.605455 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-config-data" (OuterVolumeSpecName: "config-data") pod "64d37fff-983b-4a39-89c4-dba36db2f1ba" (UID: "64d37fff-983b-4a39-89c4-dba36db2f1ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.617991 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64d37fff-983b-4a39-89c4-dba36db2f1ba" (UID: "64d37fff-983b-4a39-89c4-dba36db2f1ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.630400 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4phj\" (UniqueName: \"kubernetes.io/projected/64d37fff-983b-4a39-89c4-dba36db2f1ba-kube-api-access-p4phj\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.630440 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d37fff-983b-4a39-89c4-dba36db2f1ba-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.630456 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.630466 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.630476 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.667149 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "64d37fff-983b-4a39-89c4-dba36db2f1ba" (UID: "64d37fff-983b-4a39-89c4-dba36db2f1ba"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.691887 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "64d37fff-983b-4a39-89c4-dba36db2f1ba" (UID: "64d37fff-983b-4a39-89c4-dba36db2f1ba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.732390 4998 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.732432 4998 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d37fff-983b-4a39-89c4-dba36db2f1ba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.787907 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a87b91-16fb-436d-8c53-317b204acebc" path="/var/lib/kubelet/pods/63a87b91-16fb-436d-8c53-317b204acebc/volumes" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.788667 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906e560b-ca5b-48cf-988b-a06dfbacc876" path="/var/lib/kubelet/pods/906e560b-ca5b-48cf-988b-a06dfbacc876/volumes" Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.864821 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.865062 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="543a0e99-c247-4ab0-940e-461f495066cc" containerName="glance-log" 
containerID="cri-o://6db1d88d19bd1eee8afd58b6a8044df83c964f1e0606a25d3f38e26e921e013d" gracePeriod=30 Feb 27 10:39:46 crc kubenswrapper[4998]: I0227 10:39:46.865201 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="543a0e99-c247-4ab0-940e-461f495066cc" containerName="glance-httpd" containerID="cri-o://c048982bf17cc79eda51d9c9a9497b219787811a94818d7e1916880eff6a7eea" gracePeriod=30 Feb 27 10:39:47 crc kubenswrapper[4998]: I0227 10:39:47.009672 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:39:47 crc kubenswrapper[4998]: I0227 10:39:47.070364 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerStarted","Data":"09f754f9b6f839472f3ed6d25c940adf60173612f1865b5f8a505b58767629f6"} Feb 27 10:39:47 crc kubenswrapper[4998]: I0227 10:39:47.072827 4998 generic.go:334] "Generic (PLEG): container finished" podID="543a0e99-c247-4ab0-940e-461f495066cc" containerID="6db1d88d19bd1eee8afd58b6a8044df83c964f1e0606a25d3f38e26e921e013d" exitCode=143 Feb 27 10:39:47 crc kubenswrapper[4998]: I0227 10:39:47.072910 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"543a0e99-c247-4ab0-940e-461f495066cc","Type":"ContainerDied","Data":"6db1d88d19bd1eee8afd58b6a8044df83c964f1e0606a25d3f38e26e921e013d"} Feb 27 10:39:47 crc kubenswrapper[4998]: I0227 10:39:47.076292 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85b475b45b-ggjbp" event={"ID":"64d37fff-983b-4a39-89c4-dba36db2f1ba","Type":"ContainerDied","Data":"406a6f9bfef14b6832688f03dd8dd8484ea04fd03bc2fc905283e66b8ee54ede"} Feb 27 10:39:47 crc kubenswrapper[4998]: I0227 10:39:47.076320 4998 scope.go:117] "RemoveContainer" containerID="cbaf5c02f514eba186509dbedb6411360e62b35dd42d0510e6c64f49914d996d" Feb 27 10:39:47 crc 
kubenswrapper[4998]: I0227 10:39:47.076358 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85b475b45b-ggjbp" Feb 27 10:39:47 crc kubenswrapper[4998]: I0227 10:39:47.108148 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85b475b45b-ggjbp"] Feb 27 10:39:47 crc kubenswrapper[4998]: I0227 10:39:47.109791 4998 scope.go:117] "RemoveContainer" containerID="8da3b426469ff337c194003544e89203c78dc69859f657de6c5989d427a40e87" Feb 27 10:39:47 crc kubenswrapper[4998]: I0227 10:39:47.116644 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85b475b45b-ggjbp"] Feb 27 10:39:48 crc kubenswrapper[4998]: I0227 10:39:48.363747 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:39:48 crc kubenswrapper[4998]: I0227 10:39:48.785344 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d37fff-983b-4a39-89c4-dba36db2f1ba" path="/var/lib/kubelet/pods/64d37fff-983b-4a39-89c4-dba36db2f1ba/volumes" Feb 27 10:39:51 crc kubenswrapper[4998]: I0227 10:39:51.123802 4998 generic.go:334] "Generic (PLEG): container finished" podID="543a0e99-c247-4ab0-940e-461f495066cc" containerID="c048982bf17cc79eda51d9c9a9497b219787811a94818d7e1916880eff6a7eea" exitCode=0 Feb 27 10:39:51 crc kubenswrapper[4998]: I0227 10:39:51.123880 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"543a0e99-c247-4ab0-940e-461f495066cc","Type":"ContainerDied","Data":"c048982bf17cc79eda51d9c9a9497b219787811a94818d7e1916880eff6a7eea"} Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.658528 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.792034 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-internal-tls-certs\") pod \"543a0e99-c247-4ab0-940e-461f495066cc\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.792466 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-scripts\") pod \"543a0e99-c247-4ab0-940e-461f495066cc\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.792485 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"543a0e99-c247-4ab0-940e-461f495066cc\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.792505 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-logs\") pod \"543a0e99-c247-4ab0-940e-461f495066cc\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.792551 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-combined-ca-bundle\") pod \"543a0e99-c247-4ab0-940e-461f495066cc\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.792612 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-httpd-run\") pod \"543a0e99-c247-4ab0-940e-461f495066cc\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.792629 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-config-data\") pod \"543a0e99-c247-4ab0-940e-461f495066cc\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.792670 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vclcz\" (UniqueName: \"kubernetes.io/projected/543a0e99-c247-4ab0-940e-461f495066cc-kube-api-access-vclcz\") pod \"543a0e99-c247-4ab0-940e-461f495066cc\" (UID: \"543a0e99-c247-4ab0-940e-461f495066cc\") " Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.794090 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-logs" (OuterVolumeSpecName: "logs") pod "543a0e99-c247-4ab0-940e-461f495066cc" (UID: "543a0e99-c247-4ab0-940e-461f495066cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.794260 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "543a0e99-c247-4ab0-940e-461f495066cc" (UID: "543a0e99-c247-4ab0-940e-461f495066cc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.798414 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-scripts" (OuterVolumeSpecName: "scripts") pod "543a0e99-c247-4ab0-940e-461f495066cc" (UID: "543a0e99-c247-4ab0-940e-461f495066cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.802628 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "543a0e99-c247-4ab0-940e-461f495066cc" (UID: "543a0e99-c247-4ab0-940e-461f495066cc"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.806496 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543a0e99-c247-4ab0-940e-461f495066cc-kube-api-access-vclcz" (OuterVolumeSpecName: "kube-api-access-vclcz") pod "543a0e99-c247-4ab0-940e-461f495066cc" (UID: "543a0e99-c247-4ab0-940e-461f495066cc"). InnerVolumeSpecName "kube-api-access-vclcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.824889 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "543a0e99-c247-4ab0-940e-461f495066cc" (UID: "543a0e99-c247-4ab0-940e-461f495066cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.847937 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-config-data" (OuterVolumeSpecName: "config-data") pod "543a0e99-c247-4ab0-940e-461f495066cc" (UID: "543a0e99-c247-4ab0-940e-461f495066cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.858037 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "543a0e99-c247-4ab0-940e-461f495066cc" (UID: "543a0e99-c247-4ab0-940e-461f495066cc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.894988 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.895266 4998 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.895414 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.895475 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 
10:39:52.895538 4998 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/543a0e99-c247-4ab0-940e-461f495066cc-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.895646 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.895687 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vclcz\" (UniqueName: \"kubernetes.io/projected/543a0e99-c247-4ab0-940e-461f495066cc-kube-api-access-vclcz\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.895726 4998 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/543a0e99-c247-4ab0-940e-461f495066cc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.914378 4998 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 27 10:39:52 crc kubenswrapper[4998]: I0227 10:39:52.996953 4998 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.149997 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerStarted","Data":"8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311"} Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.152401 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"543a0e99-c247-4ab0-940e-461f495066cc","Type":"ContainerDied","Data":"3c73cf7ebfbe48810ee6b8010592983563bb9338dcdc49a3d8262c1a8e6422fb"} Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.152439 4998 scope.go:117] "RemoveContainer" containerID="c048982bf17cc79eda51d9c9a9497b219787811a94818d7e1916880eff6a7eea" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.152666 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.155258 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-p7ghm" event={"ID":"a9abdc95-5c73-40b8-a234-8b13e7be1cec","Type":"ContainerStarted","Data":"71e78464a33c722c93cd258774f6086a1dd7d42f9906c3cc6ae3dd873c06f64d"} Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.198302 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-p7ghm" podStartSLOduration=3.130641357 podStartE2EDuration="13.198274732s" podCreationTimestamp="2026-02-27 10:39:40 +0000 UTC" firstStartedPulling="2026-02-27 10:39:42.493857409 +0000 UTC m=+1334.492128377" lastFinishedPulling="2026-02-27 10:39:52.561490784 +0000 UTC m=+1344.559761752" observedRunningTime="2026-02-27 10:39:53.186918062 +0000 UTC m=+1345.185189050" watchObservedRunningTime="2026-02-27 10:39:53.198274732 +0000 UTC m=+1345.196545700" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.199501 4998 scope.go:117] "RemoveContainer" containerID="6db1d88d19bd1eee8afd58b6a8044df83c964f1e0606a25d3f38e26e921e013d" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.241477 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.275580 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:39:53 
crc kubenswrapper[4998]: I0227 10:39:53.288580 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:39:53 crc kubenswrapper[4998]: E0227 10:39:53.289088 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543a0e99-c247-4ab0-940e-461f495066cc" containerName="glance-log" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.289113 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="543a0e99-c247-4ab0-940e-461f495066cc" containerName="glance-log" Feb 27 10:39:53 crc kubenswrapper[4998]: E0227 10:39:53.289132 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543a0e99-c247-4ab0-940e-461f495066cc" containerName="glance-httpd" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.289140 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="543a0e99-c247-4ab0-940e-461f495066cc" containerName="glance-httpd" Feb 27 10:39:53 crc kubenswrapper[4998]: E0227 10:39:53.289153 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerName="placement-api" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.289160 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerName="placement-api" Feb 27 10:39:53 crc kubenswrapper[4998]: E0227 10:39:53.289180 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerName="placement-log" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.289189 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerName="placement-log" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.289377 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerName="placement-api" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.289388 4998 
memory_manager.go:354] "RemoveStaleState removing state" podUID="543a0e99-c247-4ab0-940e-461f495066cc" containerName="glance-httpd" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.289395 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d37fff-983b-4a39-89c4-dba36db2f1ba" containerName="placement-log" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.289415 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="543a0e99-c247-4ab0-940e-461f495066cc" containerName="glance-log" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.290445 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.296768 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.297083 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.301088 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.404379 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.404466 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.404514 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxt7r\" (UniqueName: \"kubernetes.io/projected/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-kube-api-access-nxt7r\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.404549 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.404588 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.404609 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.404637 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.404681 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.506611 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.506688 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt7r\" (UniqueName: \"kubernetes.io/projected/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-kube-api-access-nxt7r\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.506723 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.506759 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.506784 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.506814 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.506855 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.506918 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.507407 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.507670 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.508615 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.514718 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.521657 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.525132 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.525436 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxt7r\" (UniqueName: \"kubernetes.io/projected/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-kube-api-access-nxt7r\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.530041 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80b984c-5ec4-4e6e-9e7e-01c1653244d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.543664 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a80b984c-5ec4-4e6e-9e7e-01c1653244d5\") " pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:53 crc kubenswrapper[4998]: I0227 10:39:53.620924 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 27 10:39:54 crc kubenswrapper[4998]: I0227 10:39:54.167102 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerStarted","Data":"9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8"}
Feb 27 10:39:54 crc kubenswrapper[4998]: W0227 10:39:54.211375 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b984c_5ec4_4e6e_9e7e_01c1653244d5.slice/crio-d5677815472aac7264de30b384f90a13adae03549cd27e7c95551fc2093f6789 WatchSource:0}: Error finding container d5677815472aac7264de30b384f90a13adae03549cd27e7c95551fc2093f6789: Status 404 returned error can't find the container with id d5677815472aac7264de30b384f90a13adae03549cd27e7c95551fc2093f6789
Feb 27 10:39:54 crc kubenswrapper[4998]: I0227 10:39:54.216041 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 10:39:54 crc kubenswrapper[4998]: I0227 10:39:54.519336 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 10:39:54 crc kubenswrapper[4998]: I0227 10:39:54.519763 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerName="glance-log" containerID="cri-o://7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf" gracePeriod=30
Feb 27 10:39:54 crc kubenswrapper[4998]: I0227 10:39:54.520129 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerName="glance-httpd" containerID="cri-o://3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0" gracePeriod=30
Feb 27 10:39:54 crc kubenswrapper[4998]: I0227 10:39:54.788719 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543a0e99-c247-4ab0-940e-461f495066cc" path="/var/lib/kubelet/pods/543a0e99-c247-4ab0-940e-461f495066cc/volumes"
Feb 27 10:39:55 crc kubenswrapper[4998]: I0227 10:39:55.181558 4998 generic.go:334] "Generic (PLEG): container finished" podID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerID="7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf" exitCode=143
Feb 27 10:39:55 crc kubenswrapper[4998]: I0227 10:39:55.181926 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7","Type":"ContainerDied","Data":"7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf"}
Feb 27 10:39:55 crc kubenswrapper[4998]: I0227 10:39:55.185344 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerStarted","Data":"e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692"}
Feb 27 10:39:55 crc kubenswrapper[4998]: I0227 10:39:55.187332 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a80b984c-5ec4-4e6e-9e7e-01c1653244d5","Type":"ContainerStarted","Data":"52009334a7634229a9cb5d579ab533e61fbbf8e63c227aeffa0e1aacef97c249"}
Feb 27 10:39:55 crc kubenswrapper[4998]: I0227 10:39:55.187356 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a80b984c-5ec4-4e6e-9e7e-01c1653244d5","Type":"ContainerStarted","Data":"d5677815472aac7264de30b384f90a13adae03549cd27e7c95551fc2093f6789"}
Feb 27 10:39:56 crc kubenswrapper[4998]: I0227 10:39:56.201888 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a80b984c-5ec4-4e6e-9e7e-01c1653244d5","Type":"ContainerStarted","Data":"1b72723cdc95a2681292b1e02e4f44704352cdd274f303f04f6f9db75c26a7d1"}
Feb 27 10:39:56 crc kubenswrapper[4998]: I0227 10:39:56.223049 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.2230259549999998 podStartE2EDuration="3.223025955s" podCreationTimestamp="2026-02-27 10:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:39:56.221686582 +0000 UTC m=+1348.219957550" watchObservedRunningTime="2026-02-27 10:39:56.223025955 +0000 UTC m=+1348.221296953"
Feb 27 10:39:57 crc kubenswrapper[4998]: I0227 10:39:57.217794 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerStarted","Data":"01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6"}
Feb 27 10:39:57 crc kubenswrapper[4998]: I0227 10:39:57.217957 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="ceilometer-notification-agent" containerID="cri-o://9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8" gracePeriod=30
Feb 27 10:39:57 crc kubenswrapper[4998]: I0227 10:39:57.217893 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="ceilometer-central-agent" containerID="cri-o://8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311" gracePeriod=30
Feb 27 10:39:57 crc kubenswrapper[4998]: I0227 10:39:57.217990 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="sg-core" containerID="cri-o://e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692" gracePeriod=30
Feb 27 10:39:57 crc kubenswrapper[4998]: I0227 10:39:57.218331 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 27 10:39:57 crc kubenswrapper[4998]: I0227 10:39:57.218077 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="proxy-httpd" containerID="cri-o://01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6" gracePeriod=30
Feb 27 10:39:57 crc kubenswrapper[4998]: I0227 10:39:57.240915 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.119907956 podStartE2EDuration="11.240900068s" podCreationTimestamp="2026-02-27 10:39:46 +0000 UTC" firstStartedPulling="2026-02-27 10:39:47.018042428 +0000 UTC m=+1339.016313406" lastFinishedPulling="2026-02-27 10:39:56.13903455 +0000 UTC m=+1348.137305518" observedRunningTime="2026-02-27 10:39:57.23781427 +0000 UTC m=+1349.236085238" watchObservedRunningTime="2026-02-27 10:39:57.240900068 +0000 UTC m=+1349.239171036"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.147207 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.192057 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-combined-ca-bundle\") pod \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") "
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.192451 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-httpd-run\") pod \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") "
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.192562 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-logs\") pod \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") "
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.192593 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-config-data\") pod \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") "
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.192664 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-scripts\") pod \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") "
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.192787 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpkpz\" (UniqueName: \"kubernetes.io/projected/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-kube-api-access-bpkpz\") pod \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") "
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.192818 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-public-tls-certs\") pod \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") "
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.192876 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\" (UID: \"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7\") "
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.193721 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" (UID: "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.195115 4998 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.197395 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-logs" (OuterVolumeSpecName: "logs") pod "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" (UID: "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.201433 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-kube-api-access-bpkpz" (OuterVolumeSpecName: "kube-api-access-bpkpz") pod "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" (UID: "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7"). InnerVolumeSpecName "kube-api-access-bpkpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.210519 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-scripts" (OuterVolumeSpecName: "scripts") pod "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" (UID: "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.232211 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" (UID: "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.256395 4998 generic.go:334] "Generic (PLEG): container finished" podID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerID="01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6" exitCode=0
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.256428 4998 generic.go:334] "Generic (PLEG): container finished" podID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerID="e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692" exitCode=2
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.256435 4998 generic.go:334] "Generic (PLEG): container finished" podID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerID="9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8" exitCode=0
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.256478 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerDied","Data":"01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6"}
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.256504 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerDied","Data":"e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692"}
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.256514 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerDied","Data":"9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8"}
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.275470 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-config-data" (OuterVolumeSpecName: "config-data") pod "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" (UID: "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.277915 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" (UID: "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.278263 4998 generic.go:334] "Generic (PLEG): container finished" podID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerID="3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0" exitCode=0
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.278314 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7","Type":"ContainerDied","Data":"3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0"}
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.278345 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4dfdbf1-a4ba-472a-b88d-7c894603f9f7","Type":"ContainerDied","Data":"b1c19f0eb1db9ed519fe0017259110823b863c4815b8746211c2800ed86a6897"}
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.278366 4998 scope.go:117] "RemoveContainer" containerID="3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.278384 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.298854 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpkpz\" (UniqueName: \"kubernetes.io/projected/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-kube-api-access-bpkpz\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.298884 4998 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.298906 4998 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.298916 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-logs\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.298925 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.298932 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.319683 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" (UID: "a4dfdbf1-a4ba-472a-b88d-7c894603f9f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.323966 4998 scope.go:117] "RemoveContainer" containerID="7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.327043 4998 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.354214 4998 scope.go:117] "RemoveContainer" containerID="3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0"
Feb 27 10:39:58 crc kubenswrapper[4998]: E0227 10:39:58.354671 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0\": container with ID starting with 3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0 not found: ID does not exist" containerID="3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.354701 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0"} err="failed to get container status \"3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0\": rpc error: code = NotFound desc = could not find container \"3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0\": container with ID starting with 3979bd276ede97da4dbb5aef6000d58a58fd93bb8d6ca6a51efccf76ded9b2c0 not found: ID does not exist"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.354723 4998 scope.go:117] "RemoveContainer" containerID="7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf"
Feb 27 10:39:58 crc kubenswrapper[4998]: E0227 10:39:58.355482 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf\": container with ID starting with 7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf not found: ID does not exist" containerID="7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.355539 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf"} err="failed to get container status \"7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf\": rpc error: code = NotFound desc = could not find container \"7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf\": container with ID starting with 7928bcec7e673b58f8f8e3aa79fdc5314578de400f9fb19091a5bea7d04a0bdf not found: ID does not exist"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.402745 4998 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.402830 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.609597 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.619965 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.630153 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 10:39:58 crc kubenswrapper[4998]: E0227 10:39:58.630731 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerName="glance-httpd"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.630755 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerName="glance-httpd"
Feb 27 10:39:58 crc kubenswrapper[4998]: E0227 10:39:58.630787 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerName="glance-log"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.630797 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerName="glance-log"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.630993 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerName="glance-httpd"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.631031 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" containerName="glance-log"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.635527 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.637827 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.640614 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.650118 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.707431 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-config-data\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.707531 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93b9f3a4-566e-4aa1-9980-7747c7d53efe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.707578 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.707606 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b9f3a4-566e-4aa1-9980-7747c7d53efe-logs\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.707880 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.707969 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvqgk\" (UniqueName: \"kubernetes.io/projected/93b9f3a4-566e-4aa1-9980-7747c7d53efe-kube-api-access-lvqgk\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.708007 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.708039 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-scripts\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.794654 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4dfdbf1-a4ba-472a-b88d-7c894603f9f7" path="/var/lib/kubelet/pods/a4dfdbf1-a4ba-472a-b88d-7c894603f9f7/volumes"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.812300 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.812409 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvqgk\" (UniqueName: \"kubernetes.io/projected/93b9f3a4-566e-4aa1-9980-7747c7d53efe-kube-api-access-lvqgk\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.812469 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.812569 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-scripts\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.812653 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-config-data\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.813286 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93b9f3a4-566e-4aa1-9980-7747c7d53efe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.813431 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.813501 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b9f3a4-566e-4aa1-9980-7747c7d53efe-logs\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.814035 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.814270 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b9f3a4-566e-4aa1-9980-7747c7d53efe-logs\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.814361 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93b9f3a4-566e-4aa1-9980-7747c7d53efe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.816933 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-scripts\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.819976 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-config-data\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.820961 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.821680 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b9f3a4-566e-4aa1-9980-7747c7d53efe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0"
Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.831809 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"kube-api-access-lvqgk\" (UniqueName: \"kubernetes.io/projected/93b9f3a4-566e-4aa1-9980-7747c7d53efe-kube-api-access-lvqgk\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0" Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.845542 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"93b9f3a4-566e-4aa1-9980-7747c7d53efe\") " pod="openstack/glance-default-external-api-0" Feb 27 10:39:58 crc kubenswrapper[4998]: I0227 10:39:58.950578 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:39:59 crc kubenswrapper[4998]: I0227 10:39:59.479385 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.143439 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536480-4x855"] Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.145125 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536480-4x855" Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.147420 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.147595 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.147662 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.152461 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536480-4x855"] Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.240269 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg67h\" (UniqueName: \"kubernetes.io/projected/5f855440-d78c-4b8b-b258-b4455899fba0-kube-api-access-wg67h\") pod \"auto-csr-approver-29536480-4x855\" (UID: \"5f855440-d78c-4b8b-b258-b4455899fba0\") " pod="openshift-infra/auto-csr-approver-29536480-4x855" Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.304832 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"93b9f3a4-566e-4aa1-9980-7747c7d53efe","Type":"ContainerStarted","Data":"93c8558c87d8bee37d9e5261d6efce4e72f66074a5291c47930f4bbd7a6f5dd7"} Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.304892 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"93b9f3a4-566e-4aa1-9980-7747c7d53efe","Type":"ContainerStarted","Data":"2e0a96af3221a9cca7385c6a1fa70dfb3e84fde2b57e9660415fe02d44a33df8"} Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.342067 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wg67h\" (UniqueName: \"kubernetes.io/projected/5f855440-d78c-4b8b-b258-b4455899fba0-kube-api-access-wg67h\") pod \"auto-csr-approver-29536480-4x855\" (UID: \"5f855440-d78c-4b8b-b258-b4455899fba0\") " pod="openshift-infra/auto-csr-approver-29536480-4x855" Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.360301 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg67h\" (UniqueName: \"kubernetes.io/projected/5f855440-d78c-4b8b-b258-b4455899fba0-kube-api-access-wg67h\") pod \"auto-csr-approver-29536480-4x855\" (UID: \"5f855440-d78c-4b8b-b258-b4455899fba0\") " pod="openshift-infra/auto-csr-approver-29536480-4x855" Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.470945 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536480-4x855" Feb 27 10:40:00 crc kubenswrapper[4998]: I0227 10:40:00.931872 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536480-4x855"] Feb 27 10:40:01 crc kubenswrapper[4998]: I0227 10:40:01.318478 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536480-4x855" event={"ID":"5f855440-d78c-4b8b-b258-b4455899fba0","Type":"ContainerStarted","Data":"e6ed36209c40f6c403315d8478d5a1bc33667a52e88fe1eaeffe3e4e8a2468d4"} Feb 27 10:40:02 crc kubenswrapper[4998]: I0227 10:40:02.330616 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"93b9f3a4-566e-4aa1-9980-7747c7d53efe","Type":"ContainerStarted","Data":"fe07f500eb345afdf5ef28fa5dd6f76ebca37b1b084bd77d74ed8e1bf05ca088"} Feb 27 10:40:02 crc kubenswrapper[4998]: I0227 10:40:02.359172 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.3591501489999995 podStartE2EDuration="4.359150149s" podCreationTimestamp="2026-02-27 10:39:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:02.349077299 +0000 UTC m=+1354.347348287" watchObservedRunningTime="2026-02-27 10:40:02.359150149 +0000 UTC m=+1354.357421117" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.343875 4998 generic.go:334] "Generic (PLEG): container finished" podID="5f855440-d78c-4b8b-b258-b4455899fba0" containerID="1e63bf8db9b3b73321716d312e658449f9b74fcad62f2ec8727d5da31813ded3" exitCode=0 Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.344102 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536480-4x855" event={"ID":"5f855440-d78c-4b8b-b258-b4455899fba0","Type":"ContainerDied","Data":"1e63bf8db9b3b73321716d312e658449f9b74fcad62f2ec8727d5da31813ded3"} Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.621507 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.621596 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.669539 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.677046 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.861886 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.906693 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-config-data\") pod \"69b8224e-b49d-48aa-a714-9aad5b59274e\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.906750 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-scripts\") pod \"69b8224e-b49d-48aa-a714-9aad5b59274e\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.906790 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-log-httpd\") pod \"69b8224e-b49d-48aa-a714-9aad5b59274e\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.906865 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-run-httpd\") pod \"69b8224e-b49d-48aa-a714-9aad5b59274e\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.906891 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-sg-core-conf-yaml\") pod \"69b8224e-b49d-48aa-a714-9aad5b59274e\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.906909 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7stj\" (UniqueName: 
\"kubernetes.io/projected/69b8224e-b49d-48aa-a714-9aad5b59274e-kube-api-access-d7stj\") pod \"69b8224e-b49d-48aa-a714-9aad5b59274e\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.907055 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-combined-ca-bundle\") pod \"69b8224e-b49d-48aa-a714-9aad5b59274e\" (UID: \"69b8224e-b49d-48aa-a714-9aad5b59274e\") " Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.907321 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "69b8224e-b49d-48aa-a714-9aad5b59274e" (UID: "69b8224e-b49d-48aa-a714-9aad5b59274e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.907589 4998 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.908105 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "69b8224e-b49d-48aa-a714-9aad5b59274e" (UID: "69b8224e-b49d-48aa-a714-9aad5b59274e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.914434 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-scripts" (OuterVolumeSpecName: "scripts") pod "69b8224e-b49d-48aa-a714-9aad5b59274e" (UID: "69b8224e-b49d-48aa-a714-9aad5b59274e"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.914935 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b8224e-b49d-48aa-a714-9aad5b59274e-kube-api-access-d7stj" (OuterVolumeSpecName: "kube-api-access-d7stj") pod "69b8224e-b49d-48aa-a714-9aad5b59274e" (UID: "69b8224e-b49d-48aa-a714-9aad5b59274e"). InnerVolumeSpecName "kube-api-access-d7stj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.933677 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "69b8224e-b49d-48aa-a714-9aad5b59274e" (UID: "69b8224e-b49d-48aa-a714-9aad5b59274e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:03 crc kubenswrapper[4998]: I0227 10:40:03.992085 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-config-data" (OuterVolumeSpecName: "config-data") pod "69b8224e-b49d-48aa-a714-9aad5b59274e" (UID: "69b8224e-b49d-48aa-a714-9aad5b59274e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.009597 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.009640 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.009657 4998 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69b8224e-b49d-48aa-a714-9aad5b59274e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.009668 4998 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.009684 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7stj\" (UniqueName: \"kubernetes.io/projected/69b8224e-b49d-48aa-a714-9aad5b59274e-kube-api-access-d7stj\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.019753 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69b8224e-b49d-48aa-a714-9aad5b59274e" (UID: "69b8224e-b49d-48aa-a714-9aad5b59274e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.111326 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8224e-b49d-48aa-a714-9aad5b59274e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.356815 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.356828 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerDied","Data":"8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311"} Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.356899 4998 scope.go:117] "RemoveContainer" containerID="01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.356728 4998 generic.go:334] "Generic (PLEG): container finished" podID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerID="8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311" exitCode=0 Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.357039 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69b8224e-b49d-48aa-a714-9aad5b59274e","Type":"ContainerDied","Data":"09f754f9b6f839472f3ed6d25c940adf60173612f1865b5f8a505b58767629f6"} Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.358664 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.358764 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.387144 4998 scope.go:117] "RemoveContainer" 
containerID="e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.412162 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.427168 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.439685 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:04 crc kubenswrapper[4998]: E0227 10:40:04.440172 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="ceilometer-notification-agent" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.440195 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="ceilometer-notification-agent" Feb 27 10:40:04 crc kubenswrapper[4998]: E0227 10:40:04.440211 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="proxy-httpd" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.440218 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="proxy-httpd" Feb 27 10:40:04 crc kubenswrapper[4998]: E0227 10:40:04.440301 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="sg-core" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.440308 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="sg-core" Feb 27 10:40:04 crc kubenswrapper[4998]: E0227 10:40:04.440343 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="ceilometer-central-agent" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.440353 4998 
state_mem.go:107] "Deleted CPUSet assignment" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="ceilometer-central-agent" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.440573 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="ceilometer-notification-agent" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.440603 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="ceilometer-central-agent" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.440601 4998 scope.go:117] "RemoveContainer" containerID="9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.440619 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="sg-core" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.440636 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" containerName="proxy-httpd" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.442505 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.444750 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.444979 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.471129 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.479857 4998 scope.go:117] "RemoveContainer" containerID="8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.505363 4998 scope.go:117] "RemoveContainer" containerID="01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6" Feb 27 10:40:04 crc kubenswrapper[4998]: E0227 10:40:04.507904 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6\": container with ID starting with 01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6 not found: ID does not exist" containerID="01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.507938 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6"} err="failed to get container status \"01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6\": rpc error: code = NotFound desc = could not find container \"01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6\": container with ID starting with 01e2c150b6b614ec20024f2c1a5c17bb3b777d7909854e3ba14b663da9d015e6 not found: ID does not exist" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 
10:40:04.507959 4998 scope.go:117] "RemoveContainer" containerID="e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692" Feb 27 10:40:04 crc kubenswrapper[4998]: E0227 10:40:04.508434 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692\": container with ID starting with e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692 not found: ID does not exist" containerID="e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.508481 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692"} err="failed to get container status \"e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692\": rpc error: code = NotFound desc = could not find container \"e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692\": container with ID starting with e1e97a94ab5fbeca40aa69a84b07f8c8898ca52b1546cf23df27c286a02dc692 not found: ID does not exist" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.508513 4998 scope.go:117] "RemoveContainer" containerID="9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8" Feb 27 10:40:04 crc kubenswrapper[4998]: E0227 10:40:04.509153 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8\": container with ID starting with 9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8 not found: ID does not exist" containerID="9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.509186 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8"} err="failed to get container status \"9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8\": rpc error: code = NotFound desc = could not find container \"9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8\": container with ID starting with 9cb1765ac3ce55c4c4df1f9e81185175cbfb187394234c5b12661c71c32fa8c8 not found: ID does not exist" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.509210 4998 scope.go:117] "RemoveContainer" containerID="8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311" Feb 27 10:40:04 crc kubenswrapper[4998]: E0227 10:40:04.509541 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311\": container with ID starting with 8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311 not found: ID does not exist" containerID="8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.509591 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311"} err="failed to get container status \"8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311\": rpc error: code = NotFound desc = could not find container \"8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311\": container with ID starting with 8c2ff7bed5b0d2a4692fe488c628b7a66f6b94c3c2af404ca203e296b2ced311 not found: ID does not exist" Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.517884 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.517971 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-scripts\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.518014 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsvl2\" (UniqueName: \"kubernetes.io/projected/a3a32558-0a5d-4fe2-8d81-6d92513062de-kube-api-access-fsvl2\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.518046 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.518096 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-run-httpd\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.518157 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-config-data\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.518208 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-log-httpd\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.620002 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-log-httpd\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.620077 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.620142 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-scripts\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.620194 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsvl2\" (UniqueName: \"kubernetes.io/projected/a3a32558-0a5d-4fe2-8d81-6d92513062de-kube-api-access-fsvl2\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.620251 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.620309 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-run-httpd\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.620382 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-config-data\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.620732 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-log-httpd\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.621123 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-run-httpd\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.625533 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-config-data\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.629221 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.629774 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.630554 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-scripts\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.639336 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsvl2\" (UniqueName: \"kubernetes.io/projected/a3a32558-0a5d-4fe2-8d81-6d92513062de-kube-api-access-fsvl2\") pod \"ceilometer-0\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") " pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.721544 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536480-4x855"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.782652 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b8224e-b49d-48aa-a714-9aad5b59274e" path="/var/lib/kubelet/pods/69b8224e-b49d-48aa-a714-9aad5b59274e/volumes"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.783086 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.823718 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg67h\" (UniqueName: \"kubernetes.io/projected/5f855440-d78c-4b8b-b258-b4455899fba0-kube-api-access-wg67h\") pod \"5f855440-d78c-4b8b-b258-b4455899fba0\" (UID: \"5f855440-d78c-4b8b-b258-b4455899fba0\") "
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.828690 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f855440-d78c-4b8b-b258-b4455899fba0-kube-api-access-wg67h" (OuterVolumeSpecName: "kube-api-access-wg67h") pod "5f855440-d78c-4b8b-b258-b4455899fba0" (UID: "5f855440-d78c-4b8b-b258-b4455899fba0"). InnerVolumeSpecName "kube-api-access-wg67h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:40:04 crc kubenswrapper[4998]: I0227 10:40:04.926928 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg67h\" (UniqueName: \"kubernetes.io/projected/5f855440-d78c-4b8b-b258-b4455899fba0-kube-api-access-wg67h\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:05 crc kubenswrapper[4998]: I0227 10:40:05.205749 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:40:05 crc kubenswrapper[4998]: W0227 10:40:05.214703 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3a32558_0a5d_4fe2_8d81_6d92513062de.slice/crio-715478b3d98b6e83302b2ac7cce48e99fbbe64ed68860fcdd9c48bb7230ff17d WatchSource:0}: Error finding container 715478b3d98b6e83302b2ac7cce48e99fbbe64ed68860fcdd9c48bb7230ff17d: Status 404 returned error can't find the container with id 715478b3d98b6e83302b2ac7cce48e99fbbe64ed68860fcdd9c48bb7230ff17d
Feb 27 10:40:05 crc kubenswrapper[4998]: I0227 10:40:05.370673 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536480-4x855" event={"ID":"5f855440-d78c-4b8b-b258-b4455899fba0","Type":"ContainerDied","Data":"e6ed36209c40f6c403315d8478d5a1bc33667a52e88fe1eaeffe3e4e8a2468d4"}
Feb 27 10:40:05 crc kubenswrapper[4998]: I0227 10:40:05.370957 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ed36209c40f6c403315d8478d5a1bc33667a52e88fe1eaeffe3e4e8a2468d4"
Feb 27 10:40:05 crc kubenswrapper[4998]: I0227 10:40:05.371026 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536480-4x855"
Feb 27 10:40:05 crc kubenswrapper[4998]: I0227 10:40:05.382566 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerStarted","Data":"715478b3d98b6e83302b2ac7cce48e99fbbe64ed68860fcdd9c48bb7230ff17d"}
Feb 27 10:40:05 crc kubenswrapper[4998]: I0227 10:40:05.790286 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536474-p2kkq"]
Feb 27 10:40:05 crc kubenswrapper[4998]: I0227 10:40:05.794119 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536474-p2kkq"]
Feb 27 10:40:06 crc kubenswrapper[4998]: I0227 10:40:06.387551 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 27 10:40:06 crc kubenswrapper[4998]: I0227 10:40:06.392567 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerStarted","Data":"1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1"}
Feb 27 10:40:06 crc kubenswrapper[4998]: I0227 10:40:06.394687 4998 generic.go:334] "Generic (PLEG): container finished" podID="a9abdc95-5c73-40b8-a234-8b13e7be1cec" containerID="71e78464a33c722c93cd258774f6086a1dd7d42f9906c3cc6ae3dd873c06f64d" exitCode=0
Feb 27 10:40:06 crc kubenswrapper[4998]: I0227 10:40:06.394774 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 27 10:40:06 crc kubenswrapper[4998]: I0227 10:40:06.394970 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-p7ghm" event={"ID":"a9abdc95-5c73-40b8-a234-8b13e7be1cec","Type":"ContainerDied","Data":"71e78464a33c722c93cd258774f6086a1dd7d42f9906c3cc6ae3dd873c06f64d"}
Feb 27 10:40:06 crc kubenswrapper[4998]: I0227 10:40:06.442935 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 27 10:40:06 crc kubenswrapper[4998]: I0227 10:40:06.777399 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac252740-66dd-42c7-be96-44f999dedded" path="/var/lib/kubelet/pods/ac252740-66dd-42c7-be96-44f999dedded/volumes"
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.411088 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerStarted","Data":"e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0"}
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.411448 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerStarted","Data":"2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012"}
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.872684 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.947199 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-combined-ca-bundle\") pod \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") "
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.947258 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-scripts\") pod \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") "
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.947385 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-config-data\") pod \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") "
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.947446 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrvfh\" (UniqueName: \"kubernetes.io/projected/a9abdc95-5c73-40b8-a234-8b13e7be1cec-kube-api-access-zrvfh\") pod \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\" (UID: \"a9abdc95-5c73-40b8-a234-8b13e7be1cec\") "
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.956464 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-scripts" (OuterVolumeSpecName: "scripts") pod "a9abdc95-5c73-40b8-a234-8b13e7be1cec" (UID: "a9abdc95-5c73-40b8-a234-8b13e7be1cec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.968925 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9abdc95-5c73-40b8-a234-8b13e7be1cec-kube-api-access-zrvfh" (OuterVolumeSpecName: "kube-api-access-zrvfh") pod "a9abdc95-5c73-40b8-a234-8b13e7be1cec" (UID: "a9abdc95-5c73-40b8-a234-8b13e7be1cec"). InnerVolumeSpecName "kube-api-access-zrvfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.976348 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-config-data" (OuterVolumeSpecName: "config-data") pod "a9abdc95-5c73-40b8-a234-8b13e7be1cec" (UID: "a9abdc95-5c73-40b8-a234-8b13e7be1cec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:40:07 crc kubenswrapper[4998]: I0227 10:40:07.984318 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9abdc95-5c73-40b8-a234-8b13e7be1cec" (UID: "a9abdc95-5c73-40b8-a234-8b13e7be1cec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.049327 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.049373 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.049415 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9abdc95-5c73-40b8-a234-8b13e7be1cec-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.049427 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrvfh\" (UniqueName: \"kubernetes.io/projected/a9abdc95-5c73-40b8-a234-8b13e7be1cec-kube-api-access-zrvfh\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.165345 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.433004 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-p7ghm" event={"ID":"a9abdc95-5c73-40b8-a234-8b13e7be1cec","Type":"ContainerDied","Data":"76bb8d846aaf76b9b0b5676f4598eaa0b7118dded18bc94cf036ff7ada42dcb7"}
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.433057 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76bb8d846aaf76b9b0b5676f4598eaa0b7118dded18bc94cf036ff7ada42dcb7"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.433099 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-p7ghm"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.527205 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 10:40:08 crc kubenswrapper[4998]: E0227 10:40:08.527624 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f855440-d78c-4b8b-b258-b4455899fba0" containerName="oc"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.527649 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f855440-d78c-4b8b-b258-b4455899fba0" containerName="oc"
Feb 27 10:40:08 crc kubenswrapper[4998]: E0227 10:40:08.527691 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9abdc95-5c73-40b8-a234-8b13e7be1cec" containerName="nova-cell0-conductor-db-sync"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.527700 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9abdc95-5c73-40b8-a234-8b13e7be1cec" containerName="nova-cell0-conductor-db-sync"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.527913 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9abdc95-5c73-40b8-a234-8b13e7be1cec" containerName="nova-cell0-conductor-db-sync"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.527958 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f855440-d78c-4b8b-b258-b4455899fba0" containerName="oc"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.528748 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.531641 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.534204 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w6sh7"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.537492 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.668679 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b699j\" (UniqueName: \"kubernetes.io/projected/3a51769e-cff7-4683-bfbe-b498c4c3f5f4-kube-api-access-b699j\") pod \"nova-cell0-conductor-0\" (UID: \"3a51769e-cff7-4683-bfbe-b498c4c3f5f4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.668744 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a51769e-cff7-4683-bfbe-b498c4c3f5f4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3a51769e-cff7-4683-bfbe-b498c4c3f5f4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.668788 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a51769e-cff7-4683-bfbe-b498c4c3f5f4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3a51769e-cff7-4683-bfbe-b498c4c3f5f4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.770735 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a51769e-cff7-4683-bfbe-b498c4c3f5f4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3a51769e-cff7-4683-bfbe-b498c4c3f5f4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.770807 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a51769e-cff7-4683-bfbe-b498c4c3f5f4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3a51769e-cff7-4683-bfbe-b498c4c3f5f4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.770912 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b699j\" (UniqueName: \"kubernetes.io/projected/3a51769e-cff7-4683-bfbe-b498c4c3f5f4-kube-api-access-b699j\") pod \"nova-cell0-conductor-0\" (UID: \"3a51769e-cff7-4683-bfbe-b498c4c3f5f4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.775116 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a51769e-cff7-4683-bfbe-b498c4c3f5f4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3a51769e-cff7-4683-bfbe-b498c4c3f5f4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.779445 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a51769e-cff7-4683-bfbe-b498c4c3f5f4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3a51769e-cff7-4683-bfbe-b498c4c3f5f4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.804980 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b699j\" (UniqueName: \"kubernetes.io/projected/3a51769e-cff7-4683-bfbe-b498c4c3f5f4-kube-api-access-b699j\") pod \"nova-cell0-conductor-0\" (UID: \"3a51769e-cff7-4683-bfbe-b498c4c3f5f4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.849577 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.951618 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 27 10:40:08 crc kubenswrapper[4998]: I0227 10:40:08.951651 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.009759 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.042338 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.319218 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.444043 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerStarted","Data":"477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34"}
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.445097 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.445017 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="proxy-httpd" containerID="cri-o://477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34" gracePeriod=30
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.445034 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="sg-core" containerID="cri-o://e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0" gracePeriod=30
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.445046 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="ceilometer-notification-agent" containerID="cri-o://2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012" gracePeriod=30
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.444410 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="ceilometer-central-agent" containerID="cri-o://1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1" gracePeriod=30
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.447404 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3a51769e-cff7-4683-bfbe-b498c4c3f5f4","Type":"ContainerStarted","Data":"5ce0ca2f0ba44d9236b01d92a77f10b74d37fd24dc6a8a4f0f000d50a2c3aa2b"}
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.449659 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.449837 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 27 10:40:09 crc kubenswrapper[4998]: I0227 10:40:09.471307 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.749993968 podStartE2EDuration="5.471290756s" podCreationTimestamp="2026-02-27 10:40:04 +0000 UTC" firstStartedPulling="2026-02-27 10:40:05.218531254 +0000 UTC m=+1357.216802222" lastFinishedPulling="2026-02-27 10:40:08.939828042 +0000 UTC m=+1360.938099010" observedRunningTime="2026-02-27 10:40:09.469151374 +0000 UTC m=+1361.467422342" watchObservedRunningTime="2026-02-27 10:40:09.471290756 +0000 UTC m=+1361.469561724"
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.483462 4998 generic.go:334] "Generic (PLEG): container finished" podID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerID="477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34" exitCode=0
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.483529 4998 generic.go:334] "Generic (PLEG): container finished" podID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerID="e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0" exitCode=2
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.483539 4998 generic.go:334] "Generic (PLEG): container finished" podID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerID="2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012" exitCode=0
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.483571 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerDied","Data":"477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34"}
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.483639 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerDied","Data":"e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0"}
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.483655 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerDied","Data":"2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012"}
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.485083 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3a51769e-cff7-4683-bfbe-b498c4c3f5f4","Type":"ContainerStarted","Data":"5c5ca17d4ed3071a5920335ac53085aeb4ce66493c16a54dd66007c17f110085"}
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.485259 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.504949 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.505054 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 10:40:10 crc kubenswrapper[4998]: I0227 10:40:10.505766 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.505747949 podStartE2EDuration="2.505747949s" podCreationTimestamp="2026-02-27 10:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:10.501657152 +0000 UTC m=+1362.499928120" watchObservedRunningTime="2026-02-27 10:40:10.505747949 +0000 UTC m=+1362.504018917"
Feb 27 10:40:11 crc kubenswrapper[4998]: I0227 10:40:11.353950 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 27 10:40:11 crc kubenswrapper[4998]: I0227 10:40:11.355659 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.110000 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.198794 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-scripts\") pod \"a3a32558-0a5d-4fe2-8d81-6d92513062de\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") "
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.198876 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-config-data\") pod \"a3a32558-0a5d-4fe2-8d81-6d92513062de\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") "
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.198921 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-run-httpd\") pod \"a3a32558-0a5d-4fe2-8d81-6d92513062de\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") "
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.198967 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-sg-core-conf-yaml\") pod \"a3a32558-0a5d-4fe2-8d81-6d92513062de\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") "
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.199053 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-log-httpd\") pod \"a3a32558-0a5d-4fe2-8d81-6d92513062de\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") "
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.199090 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsvl2\" (UniqueName: \"kubernetes.io/projected/a3a32558-0a5d-4fe2-8d81-6d92513062de-kube-api-access-fsvl2\") pod \"a3a32558-0a5d-4fe2-8d81-6d92513062de\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") "
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.199116 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-combined-ca-bundle\") pod \"a3a32558-0a5d-4fe2-8d81-6d92513062de\" (UID: \"a3a32558-0a5d-4fe2-8d81-6d92513062de\") "
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.199777 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3a32558-0a5d-4fe2-8d81-6d92513062de" (UID: "a3a32558-0a5d-4fe2-8d81-6d92513062de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.199926 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3a32558-0a5d-4fe2-8d81-6d92513062de" (UID: "a3a32558-0a5d-4fe2-8d81-6d92513062de"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.205463 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3a32558-0a5d-4fe2-8d81-6d92513062de-kube-api-access-fsvl2" (OuterVolumeSpecName: "kube-api-access-fsvl2") pod "a3a32558-0a5d-4fe2-8d81-6d92513062de" (UID: "a3a32558-0a5d-4fe2-8d81-6d92513062de"). InnerVolumeSpecName "kube-api-access-fsvl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.207120 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-scripts" (OuterVolumeSpecName: "scripts") pod "a3a32558-0a5d-4fe2-8d81-6d92513062de" (UID: "a3a32558-0a5d-4fe2-8d81-6d92513062de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.234355 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3a32558-0a5d-4fe2-8d81-6d92513062de" (UID: "a3a32558-0a5d-4fe2-8d81-6d92513062de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.290910 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3a32558-0a5d-4fe2-8d81-6d92513062de" (UID: "a3a32558-0a5d-4fe2-8d81-6d92513062de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.300808 4998 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.301005 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsvl2\" (UniqueName: \"kubernetes.io/projected/a3a32558-0a5d-4fe2-8d81-6d92513062de-kube-api-access-fsvl2\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.301077 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.301147 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.301208 4998 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3a32558-0a5d-4fe2-8d81-6d92513062de-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.301300 4998 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.330811 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-config-data" (OuterVolumeSpecName: "config-data") pod "a3a32558-0a5d-4fe2-8d81-6d92513062de" (UID: "a3a32558-0a5d-4fe2-8d81-6d92513062de").
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.403583 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a32558-0a5d-4fe2-8d81-6d92513062de-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.540141 4998 generic.go:334] "Generic (PLEG): container finished" podID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerID="1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1" exitCode=0 Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.540203 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerDied","Data":"1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1"} Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.540353 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3a32558-0a5d-4fe2-8d81-6d92513062de","Type":"ContainerDied","Data":"715478b3d98b6e83302b2ac7cce48e99fbbe64ed68860fcdd9c48bb7230ff17d"} Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.540458 4998 scope.go:117] "RemoveContainer" containerID="477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.542984 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.572689 4998 scope.go:117] "RemoveContainer" containerID="e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.596776 4998 scope.go:117] "RemoveContainer" containerID="2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.600922 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.622198 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.629473 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:15 crc kubenswrapper[4998]: E0227 10:40:15.630109 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="ceilometer-central-agent" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.630200 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="ceilometer-central-agent" Feb 27 10:40:15 crc kubenswrapper[4998]: E0227 10:40:15.630485 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="sg-core" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.630574 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="sg-core" Feb 27 10:40:15 crc kubenswrapper[4998]: E0227 10:40:15.630673 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="proxy-httpd" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.630786 4998 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="proxy-httpd" Feb 27 10:40:15 crc kubenswrapper[4998]: E0227 10:40:15.630888 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="ceilometer-notification-agent" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.630992 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="ceilometer-notification-agent" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.631326 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="proxy-httpd" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.631426 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="ceilometer-central-agent" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.631522 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="sg-core" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.631609 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" containerName="ceilometer-notification-agent" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.639895 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.643846 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.644090 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.644910 4998 scope.go:117] "RemoveContainer" containerID="1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.646790 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.675104 4998 scope.go:117] "RemoveContainer" containerID="477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34" Feb 27 10:40:15 crc kubenswrapper[4998]: E0227 10:40:15.675503 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34\": container with ID starting with 477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34 not found: ID does not exist" containerID="477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.675543 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34"} err="failed to get container status \"477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34\": rpc error: code = NotFound desc = could not find container \"477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34\": container with ID starting with 477e08ae11d36a6170b114437f572ff996862a8a37609b80d5757dc6734ddc34 not found: ID does not exist" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 
10:40:15.675570 4998 scope.go:117] "RemoveContainer" containerID="e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0" Feb 27 10:40:15 crc kubenswrapper[4998]: E0227 10:40:15.675780 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0\": container with ID starting with e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0 not found: ID does not exist" containerID="e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.675811 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0"} err="failed to get container status \"e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0\": rpc error: code = NotFound desc = could not find container \"e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0\": container with ID starting with e7c7760182c891ef6d0fe326a448af30b8f435f1c321236b91a8e27b1515dbe0 not found: ID does not exist" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.675829 4998 scope.go:117] "RemoveContainer" containerID="2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012" Feb 27 10:40:15 crc kubenswrapper[4998]: E0227 10:40:15.676042 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012\": container with ID starting with 2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012 not found: ID does not exist" containerID="2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.676068 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012"} err="failed to get container status \"2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012\": rpc error: code = NotFound desc = could not find container \"2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012\": container with ID starting with 2a6cb6d66f093206488ee94a047d89560449335a8f3b824b3d0ac8222f167012 not found: ID does not exist" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.676085 4998 scope.go:117] "RemoveContainer" containerID="1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1" Feb 27 10:40:15 crc kubenswrapper[4998]: E0227 10:40:15.676624 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1\": container with ID starting with 1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1 not found: ID does not exist" containerID="1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.676654 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1"} err="failed to get container status \"1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1\": rpc error: code = NotFound desc = could not find container \"1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1\": container with ID starting with 1b41b8f5cc13cf0cdef251d1991fe61eec0f556891fd33bec1037cd685b2b3d1 not found: ID does not exist" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.709157 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-log-httpd\") pod \"ceilometer-0\" (UID: 
\"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.709285 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvw4l\" (UniqueName: \"kubernetes.io/projected/8f31638e-7dd4-4dcc-b18f-2e2618086e49-kube-api-access-mvw4l\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.709378 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.709476 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-config-data\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.709512 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-scripts\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.709605 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 
10:40:15.709662 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-run-httpd\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.810742 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-config-data\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.810794 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-scripts\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.810831 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.810857 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-run-httpd\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.810915 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-log-httpd\") pod \"ceilometer-0\" (UID: 
\"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.810968 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvw4l\" (UniqueName: \"kubernetes.io/projected/8f31638e-7dd4-4dcc-b18f-2e2618086e49-kube-api-access-mvw4l\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.811010 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.811714 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-run-httpd\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.811747 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-log-httpd\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.817812 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-scripts\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.818999 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.819787 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-config-data\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.820859 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.838418 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvw4l\" (UniqueName: \"kubernetes.io/projected/8f31638e-7dd4-4dcc-b18f-2e2618086e49-kube-api-access-mvw4l\") pod \"ceilometer-0\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " pod="openstack/ceilometer-0" Feb 27 10:40:15 crc kubenswrapper[4998]: I0227 10:40:15.958470 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:16 crc kubenswrapper[4998]: W0227 10:40:16.456905 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f31638e_7dd4_4dcc_b18f_2e2618086e49.slice/crio-d16116264e02209b0d77368bcb2fb5d647eec4f06eb83f525518145b259ad183 WatchSource:0}: Error finding container d16116264e02209b0d77368bcb2fb5d647eec4f06eb83f525518145b259ad183: Status 404 returned error can't find the container with id d16116264e02209b0d77368bcb2fb5d647eec4f06eb83f525518145b259ad183 Feb 27 10:40:16 crc kubenswrapper[4998]: I0227 10:40:16.462553 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:16 crc kubenswrapper[4998]: I0227 10:40:16.551704 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerStarted","Data":"d16116264e02209b0d77368bcb2fb5d647eec4f06eb83f525518145b259ad183"} Feb 27 10:40:16 crc kubenswrapper[4998]: I0227 10:40:16.788523 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3a32558-0a5d-4fe2-8d81-6d92513062de" path="/var/lib/kubelet/pods/a3a32558-0a5d-4fe2-8d81-6d92513062de/volumes" Feb 27 10:40:17 crc kubenswrapper[4998]: I0227 10:40:17.562518 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerStarted","Data":"abcd36feb17787ceb059688d824775c51cce1dfe554aafb2e60271f7e175d725"} Feb 27 10:40:18 crc kubenswrapper[4998]: I0227 10:40:18.576989 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerStarted","Data":"7bd08940027cd6ce215f7402e5e353df5c956e681fca1cb0bd706ada32ac99ff"} Feb 27 10:40:18 crc kubenswrapper[4998]: I0227 10:40:18.874179 4998 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.425387 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rwrfg"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.426576 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.431960 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.432319 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.454529 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rwrfg"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.479114 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-scripts\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.479164 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.479238 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-config-data\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.479334 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsb8j\" (UniqueName: \"kubernetes.io/projected/92acff51-4ca2-43c6-ab0f-480e01e9efb8-kube-api-access-dsb8j\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.572972 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.574344 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.581089 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-scripts\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.581180 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.581293 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-config-data\") pod 
\"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.581422 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsb8j\" (UniqueName: \"kubernetes.io/projected/92acff51-4ca2-43c6-ab0f-480e01e9efb8-kube-api-access-dsb8j\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.588400 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.597179 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-scripts\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.603270 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.604572 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-config-data\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.637497 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dsb8j\" (UniqueName: \"kubernetes.io/projected/92acff51-4ca2-43c6-ab0f-480e01e9efb8-kube-api-access-dsb8j\") pod \"nova-cell0-cell-mapping-rwrfg\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.655395 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerStarted","Data":"a7a97fb5fc48f1f3a5098986c6f470e11e683f23293ce3c132535ce7a75f7f2e"} Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.663824 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.683994 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gczh\" (UniqueName: \"kubernetes.io/projected/eb938d5c-49ff-476a-ae91-0c07a0321818-kube-api-access-4gczh\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.684217 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.684386 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.722413 4998 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.731210 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.737527 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.745816 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.751276 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.787245 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.787307 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.787358 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mwv5\" (UniqueName: \"kubernetes.io/projected/e07daea2-2614-48f3-ba74-542747496c3c-kube-api-access-8mwv5\") pod \"nova-scheduler-0\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.787382 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gczh\" (UniqueName: \"kubernetes.io/projected/eb938d5c-49ff-476a-ae91-0c07a0321818-kube-api-access-4gczh\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.787405 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-config-data\") pod \"nova-scheduler-0\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.787502 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.809131 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.816409 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.821467 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:19 crc 
kubenswrapper[4998]: I0227 10:40:19.822897 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.833786 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.845888 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.847034 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.848670 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.857358 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.875922 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gczh\" (UniqueName: \"kubernetes.io/projected/eb938d5c-49ff-476a-ae91-0c07a0321818-kube-api-access-4gczh\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.881744 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.889448 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pqtp\" (UniqueName: \"kubernetes.io/projected/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-kube-api-access-4pqtp\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.889487 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-logs\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.889524 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxcj\" (UniqueName: \"kubernetes.io/projected/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-kube-api-access-rpxcj\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.889730 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.889796 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.889858 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mwv5\" (UniqueName: \"kubernetes.io/projected/e07daea2-2614-48f3-ba74-542747496c3c-kube-api-access-8mwv5\") pod \"nova-scheduler-0\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.889888 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-config-data\") pod \"nova-scheduler-0\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.889942 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.890021 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-config-data\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.890065 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-config-data\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.890167 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-logs\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.915886 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " 
pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.965981 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-config-data\") pod \"nova-scheduler-0\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.986467 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mwv5\" (UniqueName: \"kubernetes.io/projected/e07daea2-2614-48f3-ba74-542747496c3c-kube-api-access-8mwv5\") pod \"nova-scheduler-0\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.988764 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d6jxk"] Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.990410 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.991961 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-config-data\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.992016 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-config-data\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.992067 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-logs\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.992094 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pqtp\" (UniqueName: \"kubernetes.io/projected/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-kube-api-access-4pqtp\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.992111 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-logs\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.992143 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpxcj\" (UniqueName: 
\"kubernetes.io/projected/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-kube-api-access-rpxcj\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.992244 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.992296 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.993785 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-logs\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.995945 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-config-data\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:19 crc kubenswrapper[4998]: I0227 10:40:19.997137 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 
10:40:20.003829 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.005697 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.007531 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-config-data\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.010836 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-logs\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.024292 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpxcj\" (UniqueName: \"kubernetes.io/projected/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-kube-api-access-rpxcj\") pod \"nova-metadata-0\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " pod="openstack/nova-metadata-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.028036 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pqtp\" (UniqueName: \"kubernetes.io/projected/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-kube-api-access-4pqtp\") pod \"nova-api-0\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " pod="openstack/nova-api-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.063182 4998 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d6jxk"] Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.080127 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.094070 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-config\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.094361 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.094495 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-svc\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.094604 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.094680 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-gdfgk\" (UniqueName: \"kubernetes.io/projected/f4b15f23-61d5-4028-a889-c212564533cf-kube-api-access-gdfgk\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.094769 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.196060 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdfgk\" (UniqueName: \"kubernetes.io/projected/f4b15f23-61d5-4028-a889-c212564533cf-kube-api-access-gdfgk\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.196133 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.196287 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-config\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.196321 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.196377 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-svc\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.196431 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.197488 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.199392 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.200836 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-config\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.200994 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-svc\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.201160 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.215981 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdfgk\" (UniqueName: \"kubernetes.io/projected/f4b15f23-61d5-4028-a889-c212564533cf-kube-api-access-gdfgk\") pod \"dnsmasq-dns-757b4f8459-d6jxk\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.264596 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.279403 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.321212 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.400349 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rwrfg"] Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.459462 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-czjfd"] Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.460560 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.464561 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.467176 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.487777 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-czjfd"] Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.501747 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-config-data\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.501789 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 
10:40:20.502199 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-scripts\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.502449 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clm8s\" (UniqueName: \"kubernetes.io/projected/7c5290fa-d988-420b-88de-065d0558ea40-kube-api-access-clm8s\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.595648 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.605938 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-scripts\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.605987 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clm8s\" (UniqueName: \"kubernetes.io/projected/7c5290fa-d988-420b-88de-065d0558ea40-kube-api-access-clm8s\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.606093 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-config-data\") pod 
\"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.606111 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.612869 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-scripts\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.613181 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.615344 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-config-data\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: W0227 10:40:20.621270 4998 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb938d5c_49ff_476a_ae91_0c07a0321818.slice/crio-68425bd4470ad9827bfb1a3445fe697212c3997ff4195465881ece779f39e0d1 WatchSource:0}: Error finding container 68425bd4470ad9827bfb1a3445fe697212c3997ff4195465881ece779f39e0d1: Status 404 returned error can't find the container with id 68425bd4470ad9827bfb1a3445fe697212c3997ff4195465881ece779f39e0d1 Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.632322 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clm8s\" (UniqueName: \"kubernetes.io/projected/7c5290fa-d988-420b-88de-065d0558ea40-kube-api-access-clm8s\") pod \"nova-cell1-conductor-db-sync-czjfd\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.670984 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerStarted","Data":"4255f4ba334c1cb3eadc9c02afddd601bb704c885a547fccf046c262321ff086"} Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.671261 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.673857 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb938d5c-49ff-476a-ae91-0c07a0321818","Type":"ContainerStarted","Data":"68425bd4470ad9827bfb1a3445fe697212c3997ff4195465881ece779f39e0d1"} Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.678284 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rwrfg" event={"ID":"92acff51-4ca2-43c6-ab0f-480e01e9efb8","Type":"ContainerStarted","Data":"96d8c65902e8dc4ee4350e66a3187759a0796f227008c75343b05fb488cc7258"} Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.712749 4998 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9771105900000001 podStartE2EDuration="5.712732813s" podCreationTimestamp="2026-02-27 10:40:15 +0000 UTC" firstStartedPulling="2026-02-27 10:40:16.459867614 +0000 UTC m=+1368.458138582" lastFinishedPulling="2026-02-27 10:40:20.195489837 +0000 UTC m=+1372.193760805" observedRunningTime="2026-02-27 10:40:20.697143161 +0000 UTC m=+1372.695414129" watchObservedRunningTime="2026-02-27 10:40:20.712732813 +0000 UTC m=+1372.711003781" Feb 27 10:40:20 crc kubenswrapper[4998]: W0227 10:40:20.758558 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode07daea2_2614_48f3_ba74_542747496c3c.slice/crio-1b4883a1a799e9e9aa09a94b47d0bb7541664983cc654dab32a923256b32a316 WatchSource:0}: Error finding container 1b4883a1a799e9e9aa09a94b47d0bb7541664983cc654dab32a923256b32a316: Status 404 returned error can't find the container with id 1b4883a1a799e9e9aa09a94b47d0bb7541664983cc654dab32a923256b32a316 Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.761271 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.801847 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:20 crc kubenswrapper[4998]: I0227 10:40:20.918883 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:21 crc kubenswrapper[4998]: W0227 10:40:21.021609 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4b15f23_61d5_4028_a889_c212564533cf.slice/crio-fa5e29e724fe4d82ea3b554abaf1733c8474df2027f9bd7fa18716076b56ed99 WatchSource:0}: Error finding container fa5e29e724fe4d82ea3b554abaf1733c8474df2027f9bd7fa18716076b56ed99: Status 404 returned error can't find the container with id fa5e29e724fe4d82ea3b554abaf1733c8474df2027f9bd7fa18716076b56ed99 Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.024297 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d6jxk"] Feb 27 10:40:21 crc kubenswrapper[4998]: W0227 10:40:21.030701 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb18428a7_ad9e_4d5f_a243_cc54ab4f3ec8.slice/crio-7f0e451382c5b7cbded7138f75fc64ea8a3d8e805489689b39576cda5a199bb9 WatchSource:0}: Error finding container 7f0e451382c5b7cbded7138f75fc64ea8a3d8e805489689b39576cda5a199bb9: Status 404 returned error can't find the container with id 7f0e451382c5b7cbded7138f75fc64ea8a3d8e805489689b39576cda5a199bb9 Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.039370 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.267493 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-czjfd"] Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.700378 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e07daea2-2614-48f3-ba74-542747496c3c","Type":"ContainerStarted","Data":"1b4883a1a799e9e9aa09a94b47d0bb7541664983cc654dab32a923256b32a316"} Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.703952 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8","Type":"ContainerStarted","Data":"7f0e451382c5b7cbded7138f75fc64ea8a3d8e805489689b39576cda5a199bb9"} Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.709263 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b15f23-61d5-4028-a889-c212564533cf" containerID="fa395cc74c71064c42c5b81f816a3d385997c09302897dacf7aa1fc439125fad" exitCode=0 Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.709383 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" event={"ID":"f4b15f23-61d5-4028-a889-c212564533cf","Type":"ContainerDied","Data":"fa395cc74c71064c42c5b81f816a3d385997c09302897dacf7aa1fc439125fad"} Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.709445 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" event={"ID":"f4b15f23-61d5-4028-a889-c212564533cf","Type":"ContainerStarted","Data":"fa5e29e724fe4d82ea3b554abaf1733c8474df2027f9bd7fa18716076b56ed99"} Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.719582 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-czjfd" event={"ID":"7c5290fa-d988-420b-88de-065d0558ea40","Type":"ContainerStarted","Data":"25b4d582b6dd8ffbbf76388d717f28f867dc4707194de6b1df65a414d23975ce"} Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.720336 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-czjfd" event={"ID":"7c5290fa-d988-420b-88de-065d0558ea40","Type":"ContainerStarted","Data":"ca9fb1ff53535160df6b1d4df2f1aaa8a6c32ac6d37b9d42822187d94776e0c0"} Feb 27 10:40:21 crc 
kubenswrapper[4998]: I0227 10:40:21.750002 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rwrfg" event={"ID":"92acff51-4ca2-43c6-ab0f-480e01e9efb8","Type":"ContainerStarted","Data":"550199ff57ddd5616ea967474384544dce1ead6cabf6b337129b1634632860fa"} Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.751460 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2","Type":"ContainerStarted","Data":"d63809d632ddce24d450e87b564a3e351cc57da6c43048fb0518e4080430fb5c"} Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.764624 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-czjfd" podStartSLOduration=1.7646029300000001 podStartE2EDuration="1.76460293s" podCreationTimestamp="2026-02-27 10:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:21.761424984 +0000 UTC m=+1373.759695962" watchObservedRunningTime="2026-02-27 10:40:21.76460293 +0000 UTC m=+1373.762873898" Feb 27 10:40:21 crc kubenswrapper[4998]: I0227 10:40:21.793114 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rwrfg" podStartSLOduration=2.793092276 podStartE2EDuration="2.793092276s" podCreationTimestamp="2026-02-27 10:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:21.788961928 +0000 UTC m=+1373.787232896" watchObservedRunningTime="2026-02-27 10:40:21.793092276 +0000 UTC m=+1373.791363244" Feb 27 10:40:22 crc kubenswrapper[4998]: I0227 10:40:22.807004 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:22 crc kubenswrapper[4998]: I0227 10:40:22.808059 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" event={"ID":"f4b15f23-61d5-4028-a889-c212564533cf","Type":"ContainerStarted","Data":"93fbe3f8ae0890e4c907024bc758f53363a96cece0ec44c2f50637b785c8c03f"} Feb 27 10:40:22 crc kubenswrapper[4998]: I0227 10:40:22.808138 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" podStartSLOduration=3.808117026 podStartE2EDuration="3.808117026s" podCreationTimestamp="2026-02-27 10:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:22.788110016 +0000 UTC m=+1374.786380994" watchObservedRunningTime="2026-02-27 10:40:22.808117026 +0000 UTC m=+1374.806387994" Feb 27 10:40:23 crc kubenswrapper[4998]: I0227 10:40:23.514583 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:40:23 crc kubenswrapper[4998]: I0227 10:40:23.526506 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.787316 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8","Type":"ContainerStarted","Data":"4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef"} Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.787810 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8","Type":"ContainerStarted","Data":"99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f"} Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.790042 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"eb938d5c-49ff-476a-ae91-0c07a0321818","Type":"ContainerStarted","Data":"a6dd5e47c6abb35450458dde4ec8aeba8b15341e384fc2b0033b757d85ca5355"} Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.790335 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="eb938d5c-49ff-476a-ae91-0c07a0321818" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a6dd5e47c6abb35450458dde4ec8aeba8b15341e384fc2b0033b757d85ca5355" gracePeriod=30 Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.798115 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2","Type":"ContainerStarted","Data":"fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b"} Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.798371 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2","Type":"ContainerStarted","Data":"8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb"} Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.798587 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerName="nova-metadata-log" containerID="cri-o://8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb" gracePeriod=30 Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.798763 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerName="nova-metadata-metadata" containerID="cri-o://fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b" gracePeriod=30 Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.803299 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e07daea2-2614-48f3-ba74-542747496c3c","Type":"ContainerStarted","Data":"2244d298d8f60466149f19dcba6b5501fbd0bb4297301bb49103d733fa1ac51f"} Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.812750 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.983279294 podStartE2EDuration="5.812728606s" podCreationTimestamp="2026-02-27 10:40:19 +0000 UTC" firstStartedPulling="2026-02-27 10:40:21.03437972 +0000 UTC m=+1373.032650678" lastFinishedPulling="2026-02-27 10:40:23.863829022 +0000 UTC m=+1375.862099990" observedRunningTime="2026-02-27 10:40:24.809664213 +0000 UTC m=+1376.807935181" watchObservedRunningTime="2026-02-27 10:40:24.812728606 +0000 UTC m=+1376.810999574" Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.835694 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.586663792 podStartE2EDuration="5.835677745s" podCreationTimestamp="2026-02-27 10:40:19 +0000 UTC" firstStartedPulling="2026-02-27 10:40:20.625797748 +0000 UTC m=+1372.624068716" lastFinishedPulling="2026-02-27 10:40:23.874811701 +0000 UTC m=+1375.873082669" observedRunningTime="2026-02-27 10:40:24.826382524 +0000 UTC m=+1376.824653492" watchObservedRunningTime="2026-02-27 10:40:24.835677745 +0000 UTC m=+1376.833948713" Feb 27 10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.847382 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.923043204 podStartE2EDuration="5.847362917s" podCreationTimestamp="2026-02-27 10:40:19 +0000 UTC" firstStartedPulling="2026-02-27 10:40:20.939685105 +0000 UTC m=+1372.937956073" lastFinishedPulling="2026-02-27 10:40:23.864004818 +0000 UTC m=+1375.862275786" observedRunningTime="2026-02-27 10:40:24.846701945 +0000 UTC m=+1376.844972923" watchObservedRunningTime="2026-02-27 10:40:24.847362917 +0000 UTC m=+1376.845633885" Feb 27 
10:40:24 crc kubenswrapper[4998]: I0227 10:40:24.877030 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.794451542 podStartE2EDuration="5.877005142s" podCreationTimestamp="2026-02-27 10:40:19 +0000 UTC" firstStartedPulling="2026-02-27 10:40:20.761337924 +0000 UTC m=+1372.759608892" lastFinishedPulling="2026-02-27 10:40:23.843891524 +0000 UTC m=+1375.842162492" observedRunningTime="2026-02-27 10:40:24.867762622 +0000 UTC m=+1376.866033600" watchObservedRunningTime="2026-02-27 10:40:24.877005142 +0000 UTC m=+1376.875276120" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.005071 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.080572 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.265576 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.265628 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.427537 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.518991 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpxcj\" (UniqueName: \"kubernetes.io/projected/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-kube-api-access-rpxcj\") pod \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.519137 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-combined-ca-bundle\") pod \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.519268 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-config-data\") pod \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.519303 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-logs\") pod \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\" (UID: \"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2\") " Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.519855 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-logs" (OuterVolumeSpecName: "logs") pod "07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" (UID: "07fa29c6-6b86-4340-bd6e-a5bf29cab4c2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.524016 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-kube-api-access-rpxcj" (OuterVolumeSpecName: "kube-api-access-rpxcj") pod "07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" (UID: "07fa29c6-6b86-4340-bd6e-a5bf29cab4c2"). InnerVolumeSpecName "kube-api-access-rpxcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.547314 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" (UID: "07fa29c6-6b86-4340-bd6e-a5bf29cab4c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.547590 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-config-data" (OuterVolumeSpecName: "config-data") pod "07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" (UID: "07fa29c6-6b86-4340-bd6e-a5bf29cab4c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.622144 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpxcj\" (UniqueName: \"kubernetes.io/projected/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-kube-api-access-rpxcj\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.622209 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.622263 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.622281 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.817862 4998 generic.go:334] "Generic (PLEG): container finished" podID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerID="fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b" exitCode=0 Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.817929 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2","Type":"ContainerDied","Data":"fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b"} Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.818092 4998 generic.go:334] "Generic (PLEG): container finished" podID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerID="8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb" exitCode=143 Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.818219 4998 
scope.go:117] "RemoveContainer" containerID="fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.817992 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.818199 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2","Type":"ContainerDied","Data":"8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb"} Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.818359 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07fa29c6-6b86-4340-bd6e-a5bf29cab4c2","Type":"ContainerDied","Data":"d63809d632ddce24d450e87b564a3e351cc57da6c43048fb0518e4080430fb5c"} Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.857469 4998 scope.go:117] "RemoveContainer" containerID="8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.874859 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.886595 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.895113 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:25 crc kubenswrapper[4998]: E0227 10:40:25.895745 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerName="nova-metadata-metadata" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.895775 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerName="nova-metadata-metadata" Feb 27 10:40:25 crc kubenswrapper[4998]: E0227 10:40:25.895799 4998 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerName="nova-metadata-log" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.895808 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerName="nova-metadata-log" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.896051 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerName="nova-metadata-metadata" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.896107 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" containerName="nova-metadata-log" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.897511 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.899613 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.899837 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.903505 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.938872 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-logs\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.938940 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvrs\" (UniqueName: 
\"kubernetes.io/projected/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-kube-api-access-qbvrs\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.939041 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.939083 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-config-data\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.939137 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.948914 4998 scope.go:117] "RemoveContainer" containerID="fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b" Feb 27 10:40:25 crc kubenswrapper[4998]: E0227 10:40:25.949340 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b\": container with ID starting with fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b not found: ID does not exist" containerID="fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b" Feb 
27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.949368 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b"} err="failed to get container status \"fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b\": rpc error: code = NotFound desc = could not find container \"fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b\": container with ID starting with fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b not found: ID does not exist" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.949388 4998 scope.go:117] "RemoveContainer" containerID="8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb" Feb 27 10:40:25 crc kubenswrapper[4998]: E0227 10:40:25.950021 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb\": container with ID starting with 8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb not found: ID does not exist" containerID="8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.950064 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb"} err="failed to get container status \"8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb\": rpc error: code = NotFound desc = could not find container \"8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb\": container with ID starting with 8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb not found: ID does not exist" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.950094 4998 scope.go:117] "RemoveContainer" 
containerID="fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.950511 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b"} err="failed to get container status \"fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b\": rpc error: code = NotFound desc = could not find container \"fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b\": container with ID starting with fe8aee7d7efd9aa4ed1fd811199d01e41e12821070e7450590a521346c28787b not found: ID does not exist" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.950564 4998 scope.go:117] "RemoveContainer" containerID="8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb" Feb 27 10:40:25 crc kubenswrapper[4998]: I0227 10:40:25.950928 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb"} err="failed to get container status \"8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb\": rpc error: code = NotFound desc = could not find container \"8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb\": container with ID starting with 8e42e56895bde54137217e772eaaef17e4d68e7acf873eb05344a075875b56bb not found: ID does not exist" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.041742 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.043673 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-logs\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.043747 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbvrs\" (UniqueName: \"kubernetes.io/projected/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-kube-api-access-qbvrs\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.043868 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.043925 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-config-data\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.044210 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-logs\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.047132 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-config-data\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc 
kubenswrapper[4998]: I0227 10:40:26.047852 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.060806 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbvrs\" (UniqueName: \"kubernetes.io/projected/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-kube-api-access-qbvrs\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.072200 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.247048 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:26 crc kubenswrapper[4998]: W0227 10:40:26.692449 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20fa2ce1_ca5f_4301_81e6_d0194d9cc869.slice/crio-c7e53f8a09d979446621859377e9caa6f958c843c709c27dba6ae85e4ef316b2 WatchSource:0}: Error finding container c7e53f8a09d979446621859377e9caa6f958c843c709c27dba6ae85e4ef316b2: Status 404 returned error can't find the container with id c7e53f8a09d979446621859377e9caa6f958c843c709c27dba6ae85e4ef316b2 Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.696133 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.776827 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07fa29c6-6b86-4340-bd6e-a5bf29cab4c2" path="/var/lib/kubelet/pods/07fa29c6-6b86-4340-bd6e-a5bf29cab4c2/volumes" Feb 27 10:40:26 crc kubenswrapper[4998]: I0227 10:40:26.830319 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20fa2ce1-ca5f-4301-81e6-d0194d9cc869","Type":"ContainerStarted","Data":"c7e53f8a09d979446621859377e9caa6f958c843c709c27dba6ae85e4ef316b2"} Feb 27 10:40:27 crc kubenswrapper[4998]: I0227 10:40:27.865015 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20fa2ce1-ca5f-4301-81e6-d0194d9cc869","Type":"ContainerStarted","Data":"40d80b3b91e5ac1526d8cfc941eb9d579308840288520894683b4361fbc0808f"} Feb 27 10:40:27 crc kubenswrapper[4998]: I0227 10:40:27.865300 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20fa2ce1-ca5f-4301-81e6-d0194d9cc869","Type":"ContainerStarted","Data":"b322c4f3a147c4c47fcf83bb32d339d7dc18a93f0f82b7795fa7aaf352c83ab3"} Feb 27 10:40:28 crc kubenswrapper[4998]: I0227 10:40:28.875720 4998 generic.go:334] "Generic 
(PLEG): container finished" podID="92acff51-4ca2-43c6-ab0f-480e01e9efb8" containerID="550199ff57ddd5616ea967474384544dce1ead6cabf6b337129b1634632860fa" exitCode=0 Feb 27 10:40:28 crc kubenswrapper[4998]: I0227 10:40:28.875965 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rwrfg" event={"ID":"92acff51-4ca2-43c6-ab0f-480e01e9efb8","Type":"ContainerDied","Data":"550199ff57ddd5616ea967474384544dce1ead6cabf6b337129b1634632860fa"} Feb 27 10:40:28 crc kubenswrapper[4998]: I0227 10:40:28.879172 4998 generic.go:334] "Generic (PLEG): container finished" podID="7c5290fa-d988-420b-88de-065d0558ea40" containerID="25b4d582b6dd8ffbbf76388d717f28f867dc4707194de6b1df65a414d23975ce" exitCode=0 Feb 27 10:40:28 crc kubenswrapper[4998]: I0227 10:40:28.879951 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-czjfd" event={"ID":"7c5290fa-d988-420b-88de-065d0558ea40","Type":"ContainerDied","Data":"25b4d582b6dd8ffbbf76388d717f28f867dc4707194de6b1df65a414d23975ce"} Feb 27 10:40:28 crc kubenswrapper[4998]: I0227 10:40:28.903672 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.903654774 podStartE2EDuration="3.903654774s" podCreationTimestamp="2026-02-27 10:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:27.886785021 +0000 UTC m=+1379.885056019" watchObservedRunningTime="2026-02-27 10:40:28.903654774 +0000 UTC m=+1380.901925742" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.081093 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.110392 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 
10:40:30.280341 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.280710 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.324547 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.353768 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.365360 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.411821 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6qj8s"] Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.412081 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" podUID="5d039984-b6c7-4498-b215-d46fdab92f47" containerName="dnsmasq-dns" containerID="cri-o://6699df0a79ea327072aee2c5a5231e89a775e998435397badc0de7158d249bcf" gracePeriod=10 Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.553973 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-combined-ca-bundle\") pod \"7c5290fa-d988-420b-88de-065d0558ea40\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.554548 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-scripts\") pod 
\"7c5290fa-d988-420b-88de-065d0558ea40\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.554584 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-config-data\") pod \"7c5290fa-d988-420b-88de-065d0558ea40\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.554678 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsb8j\" (UniqueName: \"kubernetes.io/projected/92acff51-4ca2-43c6-ab0f-480e01e9efb8-kube-api-access-dsb8j\") pod \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.554756 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-config-data\") pod \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.554784 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-combined-ca-bundle\") pod \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.554804 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clm8s\" (UniqueName: \"kubernetes.io/projected/7c5290fa-d988-420b-88de-065d0558ea40-kube-api-access-clm8s\") pod \"7c5290fa-d988-420b-88de-065d0558ea40\" (UID: \"7c5290fa-d988-420b-88de-065d0558ea40\") " Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.554840 4998 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-scripts\") pod \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\" (UID: \"92acff51-4ca2-43c6-ab0f-480e01e9efb8\") " Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.563443 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-scripts" (OuterVolumeSpecName: "scripts") pod "92acff51-4ca2-43c6-ab0f-480e01e9efb8" (UID: "92acff51-4ca2-43c6-ab0f-480e01e9efb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.565371 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5290fa-d988-420b-88de-065d0558ea40-kube-api-access-clm8s" (OuterVolumeSpecName: "kube-api-access-clm8s") pod "7c5290fa-d988-420b-88de-065d0558ea40" (UID: "7c5290fa-d988-420b-88de-065d0558ea40"). InnerVolumeSpecName "kube-api-access-clm8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.567045 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-scripts" (OuterVolumeSpecName: "scripts") pod "7c5290fa-d988-420b-88de-065d0558ea40" (UID: "7c5290fa-d988-420b-88de-065d0558ea40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.583800 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92acff51-4ca2-43c6-ab0f-480e01e9efb8-kube-api-access-dsb8j" (OuterVolumeSpecName: "kube-api-access-dsb8j") pod "92acff51-4ca2-43c6-ab0f-480e01e9efb8" (UID: "92acff51-4ca2-43c6-ab0f-480e01e9efb8"). InnerVolumeSpecName "kube-api-access-dsb8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.597378 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c5290fa-d988-420b-88de-065d0558ea40" (UID: "7c5290fa-d988-420b-88de-065d0558ea40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.604615 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-config-data" (OuterVolumeSpecName: "config-data") pod "7c5290fa-d988-420b-88de-065d0558ea40" (UID: "7c5290fa-d988-420b-88de-065d0558ea40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.607869 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92acff51-4ca2-43c6-ab0f-480e01e9efb8" (UID: "92acff51-4ca2-43c6-ab0f-480e01e9efb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.626049 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-config-data" (OuterVolumeSpecName: "config-data") pod "92acff51-4ca2-43c6-ab0f-480e01e9efb8" (UID: "92acff51-4ca2-43c6-ab0f-480e01e9efb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.657554 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.657717 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clm8s\" (UniqueName: \"kubernetes.io/projected/7c5290fa-d988-420b-88de-065d0558ea40-kube-api-access-clm8s\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.657779 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.657893 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.657957 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.658012 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5290fa-d988-420b-88de-065d0558ea40-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.658067 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsb8j\" (UniqueName: \"kubernetes.io/projected/92acff51-4ca2-43c6-ab0f-480e01e9efb8-kube-api-access-dsb8j\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.658128 4998 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92acff51-4ca2-43c6-ab0f-480e01e9efb8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.904367 4998 generic.go:334] "Generic (PLEG): container finished" podID="5d039984-b6c7-4498-b215-d46fdab92f47" containerID="6699df0a79ea327072aee2c5a5231e89a775e998435397badc0de7158d249bcf" exitCode=0 Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.904421 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" event={"ID":"5d039984-b6c7-4498-b215-d46fdab92f47","Type":"ContainerDied","Data":"6699df0a79ea327072aee2c5a5231e89a775e998435397badc0de7158d249bcf"} Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.906174 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-czjfd" event={"ID":"7c5290fa-d988-420b-88de-065d0558ea40","Type":"ContainerDied","Data":"ca9fb1ff53535160df6b1d4df2f1aaa8a6c32ac6d37b9d42822187d94776e0c0"} Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.906200 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca9fb1ff53535160df6b1d4df2f1aaa8a6c32ac6d37b9d42822187d94776e0c0" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.906285 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-czjfd" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.911944 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rwrfg" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.912384 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rwrfg" event={"ID":"92acff51-4ca2-43c6-ab0f-480e01e9efb8","Type":"ContainerDied","Data":"96d8c65902e8dc4ee4350e66a3187759a0796f227008c75343b05fb488cc7258"} Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.912402 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d8c65902e8dc4ee4350e66a3187759a0796f227008c75343b05fb488cc7258" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.947655 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.951919 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.993732 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 10:40:30 crc kubenswrapper[4998]: E0227 10:40:30.994360 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d039984-b6c7-4498-b215-d46fdab92f47" containerName="dnsmasq-dns" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.994456 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d039984-b6c7-4498-b215-d46fdab92f47" containerName="dnsmasq-dns" Feb 27 10:40:30 crc kubenswrapper[4998]: E0227 10:40:30.994538 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5290fa-d988-420b-88de-065d0558ea40" containerName="nova-cell1-conductor-db-sync" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.994612 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5290fa-d988-420b-88de-065d0558ea40" containerName="nova-cell1-conductor-db-sync" Feb 27 10:40:30 crc kubenswrapper[4998]: E0227 10:40:30.994677 4998 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d039984-b6c7-4498-b215-d46fdab92f47" containerName="init" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.994759 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d039984-b6c7-4498-b215-d46fdab92f47" containerName="init" Feb 27 10:40:30 crc kubenswrapper[4998]: E0227 10:40:30.994827 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92acff51-4ca2-43c6-ab0f-480e01e9efb8" containerName="nova-manage" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.994879 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="92acff51-4ca2-43c6-ab0f-480e01e9efb8" containerName="nova-manage" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.995146 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="92acff51-4ca2-43c6-ab0f-480e01e9efb8" containerName="nova-manage" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.995249 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d039984-b6c7-4498-b215-d46fdab92f47" containerName="dnsmasq-dns" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.995317 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5290fa-d988-420b-88de-065d0558ea40" containerName="nova-cell1-conductor-db-sync" Feb 27 10:40:30 crc kubenswrapper[4998]: I0227 10:40:30.995971 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.008036 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.022841 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.067339 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-svc\") pod \"5d039984-b6c7-4498-b215-d46fdab92f47\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.067429 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-nb\") pod \"5d039984-b6c7-4498-b215-d46fdab92f47\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.067475 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-config\") pod \"5d039984-b6c7-4498-b215-d46fdab92f47\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.067491 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l48wq\" (UniqueName: \"kubernetes.io/projected/5d039984-b6c7-4498-b215-d46fdab92f47-kube-api-access-l48wq\") pod \"5d039984-b6c7-4498-b215-d46fdab92f47\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.067531 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-swift-storage-0\") pod \"5d039984-b6c7-4498-b215-d46fdab92f47\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.067547 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-sb\") pod \"5d039984-b6c7-4498-b215-d46fdab92f47\" (UID: \"5d039984-b6c7-4498-b215-d46fdab92f47\") " Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.072355 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d039984-b6c7-4498-b215-d46fdab92f47-kube-api-access-l48wq" (OuterVolumeSpecName: "kube-api-access-l48wq") pod "5d039984-b6c7-4498-b215-d46fdab92f47" (UID: "5d039984-b6c7-4498-b215-d46fdab92f47"). InnerVolumeSpecName "kube-api-access-l48wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.117091 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5d039984-b6c7-4498-b215-d46fdab92f47" (UID: "5d039984-b6c7-4498-b215-d46fdab92f47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.117272 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5d039984-b6c7-4498-b215-d46fdab92f47" (UID: "5d039984-b6c7-4498-b215-d46fdab92f47"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.132262 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-config" (OuterVolumeSpecName: "config") pod "5d039984-b6c7-4498-b215-d46fdab92f47" (UID: "5d039984-b6c7-4498-b215-d46fdab92f47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.134428 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d039984-b6c7-4498-b215-d46fdab92f47" (UID: "5d039984-b6c7-4498-b215-d46fdab92f47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.140575 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5d039984-b6c7-4498-b215-d46fdab92f47" (UID: "5d039984-b6c7-4498-b215-d46fdab92f47"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.170676 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kz6\" (UniqueName: \"kubernetes.io/projected/f0d7aa87-1b5e-4f9e-a031-923f6c24c818-kube-api-access-55kz6\") pod \"nova-cell1-conductor-0\" (UID: \"f0d7aa87-1b5e-4f9e-a031-923f6c24c818\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.171345 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d7aa87-1b5e-4f9e-a031-923f6c24c818-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f0d7aa87-1b5e-4f9e-a031-923f6c24c818\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.171433 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d7aa87-1b5e-4f9e-a031-923f6c24c818-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f0d7aa87-1b5e-4f9e-a031-923f6c24c818\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.171548 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.171564 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.171588 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l48wq\" (UniqueName: 
\"kubernetes.io/projected/5d039984-b6c7-4498-b215-d46fdab92f47-kube-api-access-l48wq\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.171599 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.171608 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.171616 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d039984-b6c7-4498-b215-d46fdab92f47-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.171830 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.190558 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.190813 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" containerName="nova-metadata-log" containerID="cri-o://b322c4f3a147c4c47fcf83bb32d339d7dc18a93f0f82b7795fa7aaf352c83ab3" gracePeriod=30 Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.190937 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" containerName="nova-metadata-metadata" containerID="cri-o://40d80b3b91e5ac1526d8cfc941eb9d579308840288520894683b4361fbc0808f" gracePeriod=30 Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.248095 4998 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.248143 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.272878 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55kz6\" (UniqueName: \"kubernetes.io/projected/f0d7aa87-1b5e-4f9e-a031-923f6c24c818-kube-api-access-55kz6\") pod \"nova-cell1-conductor-0\" (UID: \"f0d7aa87-1b5e-4f9e-a031-923f6c24c818\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.272980 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d7aa87-1b5e-4f9e-a031-923f6c24c818-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f0d7aa87-1b5e-4f9e-a031-923f6c24c818\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.273066 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d7aa87-1b5e-4f9e-a031-923f6c24c818-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f0d7aa87-1b5e-4f9e-a031-923f6c24c818\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.278008 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d7aa87-1b5e-4f9e-a031-923f6c24c818-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f0d7aa87-1b5e-4f9e-a031-923f6c24c818\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.280109 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0d7aa87-1b5e-4f9e-a031-923f6c24c818-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f0d7aa87-1b5e-4f9e-a031-923f6c24c818\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.289395 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55kz6\" (UniqueName: \"kubernetes.io/projected/f0d7aa87-1b5e-4f9e-a031-923f6c24c818-kube-api-access-55kz6\") pod \"nova-cell1-conductor-0\" (UID: \"f0d7aa87-1b5e-4f9e-a031-923f6c24c818\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.312915 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.366522 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.366553 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.509398 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.817715 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.923718 4998 generic.go:334] "Generic (PLEG): container finished" podID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" 
containerID="40d80b3b91e5ac1526d8cfc941eb9d579308840288520894683b4361fbc0808f" exitCode=0 Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.924078 4998 generic.go:334] "Generic (PLEG): container finished" podID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" containerID="b322c4f3a147c4c47fcf83bb32d339d7dc18a93f0f82b7795fa7aaf352c83ab3" exitCode=143 Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.923807 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20fa2ce1-ca5f-4301-81e6-d0194d9cc869","Type":"ContainerDied","Data":"40d80b3b91e5ac1526d8cfc941eb9d579308840288520894683b4361fbc0808f"} Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.924166 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20fa2ce1-ca5f-4301-81e6-d0194d9cc869","Type":"ContainerDied","Data":"b322c4f3a147c4c47fcf83bb32d339d7dc18a93f0f82b7795fa7aaf352c83ab3"} Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.925385 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f0d7aa87-1b5e-4f9e-a031-923f6c24c818","Type":"ContainerStarted","Data":"edaa831d80122503521f21940992eb0d9dfe228283c51818134a2f74fdda02f3"} Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.927861 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.935200 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6qj8s" event={"ID":"5d039984-b6c7-4498-b215-d46fdab92f47","Type":"ContainerDied","Data":"44848b4aeedb25bec6fa56575b7e94e68a5dd159519f6b991c8ccfb4f206a72b"} Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.935313 4998 scope.go:117] "RemoveContainer" containerID="6699df0a79ea327072aee2c5a5231e89a775e998435397badc0de7158d249bcf" Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.935820 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-api" containerID="cri-o://4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef" gracePeriod=30 Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.936185 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-log" containerID="cri-o://99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f" gracePeriod=30 Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.972421 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6qj8s"] Feb 27 10:40:31 crc kubenswrapper[4998]: I0227 10:40:31.982359 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6qj8s"] Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.117964 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.198382 4998 scope.go:117] "RemoveContainer" containerID="5f4ced49849e0649523d983e7c39c8a92ffaa3036d0d34953cbc2d2ed82949fc" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.297064 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbvrs\" (UniqueName: \"kubernetes.io/projected/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-kube-api-access-qbvrs\") pod \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.297223 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-combined-ca-bundle\") pod \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.297308 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-logs\") pod \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.297392 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-config-data\") pod \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\" (UID: \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.297464 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-nova-metadata-tls-certs\") pod \"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\" (UID: 
\"20fa2ce1-ca5f-4301-81e6-d0194d9cc869\") " Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.299112 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-logs" (OuterVolumeSpecName: "logs") pod "20fa2ce1-ca5f-4301-81e6-d0194d9cc869" (UID: "20fa2ce1-ca5f-4301-81e6-d0194d9cc869"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.304975 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-kube-api-access-qbvrs" (OuterVolumeSpecName: "kube-api-access-qbvrs") pod "20fa2ce1-ca5f-4301-81e6-d0194d9cc869" (UID: "20fa2ce1-ca5f-4301-81e6-d0194d9cc869"). InnerVolumeSpecName "kube-api-access-qbvrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.325978 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-config-data" (OuterVolumeSpecName: "config-data") pod "20fa2ce1-ca5f-4301-81e6-d0194d9cc869" (UID: "20fa2ce1-ca5f-4301-81e6-d0194d9cc869"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.334043 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20fa2ce1-ca5f-4301-81e6-d0194d9cc869" (UID: "20fa2ce1-ca5f-4301-81e6-d0194d9cc869"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.358070 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "20fa2ce1-ca5f-4301-81e6-d0194d9cc869" (UID: "20fa2ce1-ca5f-4301-81e6-d0194d9cc869"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.399617 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbvrs\" (UniqueName: \"kubernetes.io/projected/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-kube-api-access-qbvrs\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.399842 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.399914 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.399976 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.400051 4998 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fa2ce1-ca5f-4301-81e6-d0194d9cc869-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.783582 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5d039984-b6c7-4498-b215-d46fdab92f47" path="/var/lib/kubelet/pods/5d039984-b6c7-4498-b215-d46fdab92f47/volumes" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.937559 4998 generic.go:334] "Generic (PLEG): container finished" podID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerID="99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f" exitCode=143 Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.937611 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8","Type":"ContainerDied","Data":"99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f"} Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.941490 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20fa2ce1-ca5f-4301-81e6-d0194d9cc869","Type":"ContainerDied","Data":"c7e53f8a09d979446621859377e9caa6f958c843c709c27dba6ae85e4ef316b2"} Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.941528 4998 scope.go:117] "RemoveContainer" containerID="40d80b3b91e5ac1526d8cfc941eb9d579308840288520894683b4361fbc0808f" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.941655 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.945606 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f0d7aa87-1b5e-4f9e-a031-923f6c24c818","Type":"ContainerStarted","Data":"8e76642e0ffcca89eeaa75a8df8971a5f0f524e483eeea6587ea7bc9d27edf0a"} Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.945754 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e07daea2-2614-48f3-ba74-542747496c3c" containerName="nova-scheduler-scheduler" containerID="cri-o://2244d298d8f60466149f19dcba6b5501fbd0bb4297301bb49103d733fa1ac51f" gracePeriod=30 Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.945790 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.967704 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.973037 4998 scope.go:117] "RemoveContainer" containerID="b322c4f3a147c4c47fcf83bb32d339d7dc18a93f0f82b7795fa7aaf352c83ab3" Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.980563 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:32 crc kubenswrapper[4998]: I0227 10:40:32.986963 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.986944366 podStartE2EDuration="2.986944366s" podCreationTimestamp="2026-02-27 10:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:32.974076274 +0000 UTC m=+1384.972347242" watchObservedRunningTime="2026-02-27 10:40:32.986944366 +0000 UTC m=+1384.985215334" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 
10:40:33.012394 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:33 crc kubenswrapper[4998]: E0227 10:40:33.012923 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" containerName="nova-metadata-metadata" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.012942 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" containerName="nova-metadata-metadata" Feb 27 10:40:33 crc kubenswrapper[4998]: E0227 10:40:33.012969 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" containerName="nova-metadata-log" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.012977 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" containerName="nova-metadata-log" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.013272 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" containerName="nova-metadata-log" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.013294 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" containerName="nova-metadata-metadata" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.014617 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.019965 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.020109 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.033803 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.113538 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.114033 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952fef0f-7957-4cec-81ee-60043bf510c9-logs\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.114059 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqfd9\" (UniqueName: \"kubernetes.io/projected/952fef0f-7957-4cec-81ee-60043bf510c9-kube-api-access-jqfd9\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.114178 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-config-data\") pod \"nova-metadata-0\" 
(UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.114202 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.215437 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-config-data\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.215625 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.215761 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.215804 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952fef0f-7957-4cec-81ee-60043bf510c9-logs\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.215828 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqfd9\" (UniqueName: \"kubernetes.io/projected/952fef0f-7957-4cec-81ee-60043bf510c9-kube-api-access-jqfd9\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.216619 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952fef0f-7957-4cec-81ee-60043bf510c9-logs\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.221839 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.223548 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-config-data\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.226198 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.244814 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqfd9\" (UniqueName: \"kubernetes.io/projected/952fef0f-7957-4cec-81ee-60043bf510c9-kube-api-access-jqfd9\") pod 
\"nova-metadata-0\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.350394 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.815308 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:40:33 crc kubenswrapper[4998]: I0227 10:40:33.963721 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952fef0f-7957-4cec-81ee-60043bf510c9","Type":"ContainerStarted","Data":"56b63101273c17a788bd7d6e1382178e3462c602931a011ca3b1098125e6ce06"} Feb 27 10:40:34 crc kubenswrapper[4998]: I0227 10:40:34.784368 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20fa2ce1-ca5f-4301-81e6-d0194d9cc869" path="/var/lib/kubelet/pods/20fa2ce1-ca5f-4301-81e6-d0194d9cc869/volumes" Feb 27 10:40:34 crc kubenswrapper[4998]: I0227 10:40:34.977457 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952fef0f-7957-4cec-81ee-60043bf510c9","Type":"ContainerStarted","Data":"765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782"} Feb 27 10:40:34 crc kubenswrapper[4998]: I0227 10:40:34.977494 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952fef0f-7957-4cec-81ee-60043bf510c9","Type":"ContainerStarted","Data":"abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae"} Feb 27 10:40:35 crc kubenswrapper[4998]: I0227 10:40:35.001744 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.001720246 podStartE2EDuration="3.001720246s" podCreationTimestamp="2026-02-27 10:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 10:40:34.995279001 +0000 UTC m=+1386.993549989" watchObservedRunningTime="2026-02-27 10:40:35.001720246 +0000 UTC m=+1386.999991214" Feb 27 10:40:35 crc kubenswrapper[4998]: E0227 10:40:35.082664 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2244d298d8f60466149f19dcba6b5501fbd0bb4297301bb49103d733fa1ac51f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 10:40:35 crc kubenswrapper[4998]: E0227 10:40:35.084666 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2244d298d8f60466149f19dcba6b5501fbd0bb4297301bb49103d733fa1ac51f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 10:40:35 crc kubenswrapper[4998]: E0227 10:40:35.086873 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2244d298d8f60466149f19dcba6b5501fbd0bb4297301bb49103d733fa1ac51f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 10:40:35 crc kubenswrapper[4998]: E0227 10:40:35.086993 4998 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e07daea2-2614-48f3-ba74-542747496c3c" containerName="nova-scheduler-scheduler" Feb 27 10:40:35 crc kubenswrapper[4998]: I0227 10:40:35.989733 4998 generic.go:334] "Generic (PLEG): container finished" podID="e07daea2-2614-48f3-ba74-542747496c3c" containerID="2244d298d8f60466149f19dcba6b5501fbd0bb4297301bb49103d733fa1ac51f" exitCode=0 Feb 27 10:40:35 crc 
kubenswrapper[4998]: I0227 10:40:35.989811 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e07daea2-2614-48f3-ba74-542747496c3c","Type":"ContainerDied","Data":"2244d298d8f60466149f19dcba6b5501fbd0bb4297301bb49103d733fa1ac51f"} Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.169081 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.290729 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mwv5\" (UniqueName: \"kubernetes.io/projected/e07daea2-2614-48f3-ba74-542747496c3c-kube-api-access-8mwv5\") pod \"e07daea2-2614-48f3-ba74-542747496c3c\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.291148 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-combined-ca-bundle\") pod \"e07daea2-2614-48f3-ba74-542747496c3c\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.291200 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-config-data\") pod \"e07daea2-2614-48f3-ba74-542747496c3c\" (UID: \"e07daea2-2614-48f3-ba74-542747496c3c\") " Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.298474 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e07daea2-2614-48f3-ba74-542747496c3c-kube-api-access-8mwv5" (OuterVolumeSpecName: "kube-api-access-8mwv5") pod "e07daea2-2614-48f3-ba74-542747496c3c" (UID: "e07daea2-2614-48f3-ba74-542747496c3c"). InnerVolumeSpecName "kube-api-access-8mwv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.338017 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e07daea2-2614-48f3-ba74-542747496c3c" (UID: "e07daea2-2614-48f3-ba74-542747496c3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.345194 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-config-data" (OuterVolumeSpecName: "config-data") pod "e07daea2-2614-48f3-ba74-542747496c3c" (UID: "e07daea2-2614-48f3-ba74-542747496c3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.395076 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.395116 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07daea2-2614-48f3-ba74-542747496c3c-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.395126 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mwv5\" (UniqueName: \"kubernetes.io/projected/e07daea2-2614-48f3-ba74-542747496c3c-kube-api-access-8mwv5\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.753094 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.904523 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-combined-ca-bundle\") pod \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.904872 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-logs\") pod \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.905033 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pqtp\" (UniqueName: \"kubernetes.io/projected/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-kube-api-access-4pqtp\") pod \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.905116 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-config-data\") pod \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\" (UID: \"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8\") " Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.905308 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-logs" (OuterVolumeSpecName: "logs") pod "b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" (UID: "b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.906441 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.910147 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-kube-api-access-4pqtp" (OuterVolumeSpecName: "kube-api-access-4pqtp") pod "b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" (UID: "b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8"). InnerVolumeSpecName "kube-api-access-4pqtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.931968 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" (UID: "b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:36 crc kubenswrapper[4998]: I0227 10:40:36.937701 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-config-data" (OuterVolumeSpecName: "config-data") pod "b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" (UID: "b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.007990 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.008034 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pqtp\" (UniqueName: \"kubernetes.io/projected/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-kube-api-access-4pqtp\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.008056 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.013698 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.013916 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e07daea2-2614-48f3-ba74-542747496c3c","Type":"ContainerDied","Data":"1b4883a1a799e9e9aa09a94b47d0bb7541664983cc654dab32a923256b32a316"} Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.013980 4998 scope.go:117] "RemoveContainer" containerID="2244d298d8f60466149f19dcba6b5501fbd0bb4297301bb49103d733fa1ac51f" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.017290 4998 generic.go:334] "Generic (PLEG): container finished" podID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerID="4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef" exitCode=0 Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.017335 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.017340 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8","Type":"ContainerDied","Data":"4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef"} Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.017445 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8","Type":"ContainerDied","Data":"7f0e451382c5b7cbded7138f75fc64ea8a3d8e805489689b39576cda5a199bb9"} Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.039315 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.050193 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.056157 4998 scope.go:117] "RemoveContainer" containerID="4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.064942 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.076992 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.085211 4998 scope.go:117] "RemoveContainer" containerID="99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.101001 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:40:37 crc kubenswrapper[4998]: E0227 10:40:37.101622 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07daea2-2614-48f3-ba74-542747496c3c" containerName="nova-scheduler-scheduler" Feb 27 10:40:37 crc kubenswrapper[4998]: 
I0227 10:40:37.101643 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07daea2-2614-48f3-ba74-542747496c3c" containerName="nova-scheduler-scheduler" Feb 27 10:40:37 crc kubenswrapper[4998]: E0227 10:40:37.101666 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-log" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.101675 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-log" Feb 27 10:40:37 crc kubenswrapper[4998]: E0227 10:40:37.101701 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-api" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.101708 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-api" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.101949 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-log" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.101968 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="e07daea2-2614-48f3-ba74-542747496c3c" containerName="nova-scheduler-scheduler" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.102025 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" containerName="nova-api-api" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.103493 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.111965 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.114108 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.140373 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.140457 4998 scope.go:117] "RemoveContainer" containerID="4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.142012 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: E0227 10:40:37.144181 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef\": container with ID starting with 4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef not found: ID does not exist" containerID="4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.144217 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef"} err="failed to get container status \"4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef\": rpc error: code = NotFound desc = could not find container \"4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef\": container with ID starting with 4a29d2434e903e5b1e27e710ec0cce39f56aa1731ff4dd073bc8dc5ce3c43fef not found: ID does not exist" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.144261 4998 
scope.go:117] "RemoveContainer" containerID="99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f" Feb 27 10:40:37 crc kubenswrapper[4998]: E0227 10:40:37.145705 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f\": container with ID starting with 99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f not found: ID does not exist" containerID="99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.145735 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f"} err="failed to get container status \"99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f\": rpc error: code = NotFound desc = could not find container \"99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f\": container with ID starting with 99a9d98038421dae552f8865c56882490ae3e008d26e3aafca89061184a3d60f not found: ID does not exist" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.147392 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.148808 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.211908 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.211961 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zk7d\" (UniqueName: \"kubernetes.io/projected/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-kube-api-access-2zk7d\") pod \"nova-scheduler-0\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.211988 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-config-data\") pod \"nova-scheduler-0\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.313314 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-config-data\") pod \"nova-scheduler-0\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.313478 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.313518 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-config-data\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.313574 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a1a7d456-32f8-4fab-a874-4d30b877628c-logs\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.313666 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.313731 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rf27\" (UniqueName: \"kubernetes.io/projected/a1a7d456-32f8-4fab-a874-4d30b877628c-kube-api-access-6rf27\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.313773 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zk7d\" (UniqueName: \"kubernetes.io/projected/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-kube-api-access-2zk7d\") pod \"nova-scheduler-0\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.317982 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-config-data\") pod \"nova-scheduler-0\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.319305 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") " 
pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.337072 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zk7d\" (UniqueName: \"kubernetes.io/projected/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-kube-api-access-2zk7d\") pod \"nova-scheduler-0\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.415747 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rf27\" (UniqueName: \"kubernetes.io/projected/a1a7d456-32f8-4fab-a874-4d30b877628c-kube-api-access-6rf27\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.415947 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.416061 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-config-data\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.416762 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1a7d456-32f8-4fab-a874-4d30b877628c-logs\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.417209 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a1a7d456-32f8-4fab-a874-4d30b877628c-logs\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.419697 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-config-data\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.420354 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.433840 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rf27\" (UniqueName: \"kubernetes.io/projected/a1a7d456-32f8-4fab-a874-4d30b877628c-kube-api-access-6rf27\") pod \"nova-api-0\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.435091 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.457008 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.886120 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:40:37 crc kubenswrapper[4998]: I0227 10:40:37.949505 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:40:38 crc kubenswrapper[4998]: I0227 10:40:38.029454 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1a7d456-32f8-4fab-a874-4d30b877628c","Type":"ContainerStarted","Data":"81aea7b7e58934072e2ddd5982036e75f41cea71d115222029df5d29ceee8aa9"} Feb 27 10:40:38 crc kubenswrapper[4998]: I0227 10:40:38.032755 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05c40d08-f020-4ef5-8e19-fbbc9abe46a4","Type":"ContainerStarted","Data":"b839a8e997f2f3786c278a528e710fda1012762940011d12624376ef41599785"} Feb 27 10:40:38 crc kubenswrapper[4998]: I0227 10:40:38.350464 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:40:38 crc kubenswrapper[4998]: I0227 10:40:38.350977 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:40:38 crc kubenswrapper[4998]: I0227 10:40:38.782953 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8" path="/var/lib/kubelet/pods/b18428a7-ad9e-4d5f-a243-cc54ab4f3ec8/volumes" Feb 27 10:40:38 crc kubenswrapper[4998]: I0227 10:40:38.783684 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e07daea2-2614-48f3-ba74-542747496c3c" path="/var/lib/kubelet/pods/e07daea2-2614-48f3-ba74-542747496c3c/volumes" Feb 27 10:40:39 crc kubenswrapper[4998]: I0227 10:40:39.070094 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a1a7d456-32f8-4fab-a874-4d30b877628c","Type":"ContainerStarted","Data":"614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d"} Feb 27 10:40:39 crc kubenswrapper[4998]: I0227 10:40:39.070428 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1a7d456-32f8-4fab-a874-4d30b877628c","Type":"ContainerStarted","Data":"a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9"} Feb 27 10:40:39 crc kubenswrapper[4998]: I0227 10:40:39.072319 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05c40d08-f020-4ef5-8e19-fbbc9abe46a4","Type":"ContainerStarted","Data":"7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9"} Feb 27 10:40:39 crc kubenswrapper[4998]: I0227 10:40:39.101883 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.101861694 podStartE2EDuration="2.101861694s" podCreationTimestamp="2026-02-27 10:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:39.095248152 +0000 UTC m=+1391.093519130" watchObservedRunningTime="2026-02-27 10:40:39.101861694 +0000 UTC m=+1391.100132662" Feb 27 10:40:40 crc kubenswrapper[4998]: I0227 10:40:40.504120 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:40:40 crc kubenswrapper[4998]: I0227 10:40:40.504408 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:40:41 crc kubenswrapper[4998]: I0227 10:40:41.356685 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 27 10:40:41 crc kubenswrapper[4998]: I0227 10:40:41.386303 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.386284997 podStartE2EDuration="4.386284997s" podCreationTimestamp="2026-02-27 10:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:39.124251235 +0000 UTC m=+1391.122522213" watchObservedRunningTime="2026-02-27 10:40:41.386284997 +0000 UTC m=+1393.384555965" Feb 27 10:40:42 crc kubenswrapper[4998]: I0227 10:40:42.435586 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 10:40:43 crc kubenswrapper[4998]: I0227 10:40:43.350804 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 10:40:43 crc kubenswrapper[4998]: I0227 10:40:43.351102 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 10:40:44 crc kubenswrapper[4998]: I0227 10:40:44.365350 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 10:40:44 crc kubenswrapper[4998]: I0227 10:40:44.365413 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 27 10:40:45 crc kubenswrapper[4998]: I0227 10:40:45.479265 4998 scope.go:117] "RemoveContainer" containerID="5d69fd183772516d968030557baa8c6087e1fe8930806bedbd19ccb753e4d54c" Feb 27 10:40:45 crc kubenswrapper[4998]: I0227 10:40:45.967095 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 10:40:47 crc kubenswrapper[4998]: I0227 10:40:47.435918 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 10:40:47 crc kubenswrapper[4998]: I0227 10:40:47.458597 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 10:40:47 crc kubenswrapper[4998]: I0227 10:40:47.458650 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 10:40:47 crc kubenswrapper[4998]: I0227 10:40:47.462495 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 10:40:48 crc kubenswrapper[4998]: I0227 10:40:48.208690 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 10:40:48 crc kubenswrapper[4998]: I0227 10:40:48.541423 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 10:40:48 crc kubenswrapper[4998]: I0227 10:40:48.541423 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 10:40:49 crc 
kubenswrapper[4998]: I0227 10:40:49.558951 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:40:49 crc kubenswrapper[4998]: I0227 10:40:49.560162 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2940ab76-4aef-4a1d-8cd5-6fd4cbced5be" containerName="kube-state-metrics" containerID="cri-o://9dc2a66377bafe558f9015d2aeb659428ad2b9001f08812180a6c776d0ef7b6e" gracePeriod=30 Feb 27 10:40:50 crc kubenswrapper[4998]: I0227 10:40:50.200396 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="2940ab76-4aef-4a1d-8cd5-6fd4cbced5be" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": dial tcp 10.217.0.106:8081: connect: connection refused" Feb 27 10:40:50 crc kubenswrapper[4998]: I0227 10:40:50.243050 4998 generic.go:334] "Generic (PLEG): container finished" podID="2940ab76-4aef-4a1d-8cd5-6fd4cbced5be" containerID="9dc2a66377bafe558f9015d2aeb659428ad2b9001f08812180a6c776d0ef7b6e" exitCode=2 Feb 27 10:40:50 crc kubenswrapper[4998]: I0227 10:40:50.243364 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2940ab76-4aef-4a1d-8cd5-6fd4cbced5be","Type":"ContainerDied","Data":"9dc2a66377bafe558f9015d2aeb659428ad2b9001f08812180a6c776d0ef7b6e"} Feb 27 10:40:50 crc kubenswrapper[4998]: I0227 10:40:50.656792 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:40:50 crc kubenswrapper[4998]: I0227 10:40:50.785045 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql5ts\" (UniqueName: \"kubernetes.io/projected/2940ab76-4aef-4a1d-8cd5-6fd4cbced5be-kube-api-access-ql5ts\") pod \"2940ab76-4aef-4a1d-8cd5-6fd4cbced5be\" (UID: \"2940ab76-4aef-4a1d-8cd5-6fd4cbced5be\") " Feb 27 10:40:50 crc kubenswrapper[4998]: I0227 10:40:50.791007 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2940ab76-4aef-4a1d-8cd5-6fd4cbced5be-kube-api-access-ql5ts" (OuterVolumeSpecName: "kube-api-access-ql5ts") pod "2940ab76-4aef-4a1d-8cd5-6fd4cbced5be" (UID: "2940ab76-4aef-4a1d-8cd5-6fd4cbced5be"). InnerVolumeSpecName "kube-api-access-ql5ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:50 crc kubenswrapper[4998]: I0227 10:40:50.886993 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql5ts\" (UniqueName: \"kubernetes.io/projected/2940ab76-4aef-4a1d-8cd5-6fd4cbced5be-kube-api-access-ql5ts\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.256176 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2940ab76-4aef-4a1d-8cd5-6fd4cbced5be","Type":"ContainerDied","Data":"b8a23a0969b6a2af551da48b3a73a47ef633ee638b39995b67a82110a5c380a5"} Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.256514 4998 scope.go:117] "RemoveContainer" containerID="9dc2a66377bafe558f9015d2aeb659428ad2b9001f08812180a6c776d0ef7b6e" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.256323 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.297216 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.317572 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.328760 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:40:51 crc kubenswrapper[4998]: E0227 10:40:51.329156 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2940ab76-4aef-4a1d-8cd5-6fd4cbced5be" containerName="kube-state-metrics" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.329263 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2940ab76-4aef-4a1d-8cd5-6fd4cbced5be" containerName="kube-state-metrics" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.329460 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="2940ab76-4aef-4a1d-8cd5-6fd4cbced5be" containerName="kube-state-metrics" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.330078 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.332585 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.332638 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.355508 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.397075 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba497ed5-458a-4720-a701-9e6f9a200c6d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.397151 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba497ed5-458a-4720-a701-9e6f9a200c6d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.397182 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbc5\" (UniqueName: \"kubernetes.io/projected/ba497ed5-458a-4720-a701-9e6f9a200c6d-kube-api-access-7hbc5\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.397378 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/ba497ed5-458a-4720-a701-9e6f9a200c6d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.498572 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ba497ed5-458a-4720-a701-9e6f9a200c6d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.498908 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba497ed5-458a-4720-a701-9e6f9a200c6d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.499024 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba497ed5-458a-4720-a701-9e6f9a200c6d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.499555 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbc5\" (UniqueName: \"kubernetes.io/projected/ba497ed5-458a-4720-a701-9e6f9a200c6d-kube-api-access-7hbc5\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.505170 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ba497ed5-458a-4720-a701-9e6f9a200c6d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.512554 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ba497ed5-458a-4720-a701-9e6f9a200c6d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.512891 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba497ed5-458a-4720-a701-9e6f9a200c6d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.515893 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbc5\" (UniqueName: \"kubernetes.io/projected/ba497ed5-458a-4720-a701-9e6f9a200c6d-kube-api-access-7hbc5\") pod \"kube-state-metrics-0\" (UID: \"ba497ed5-458a-4720-a701-9e6f9a200c6d\") " pod="openstack/kube-state-metrics-0" Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.570777 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.575883 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="ceilometer-central-agent" containerID="cri-o://abcd36feb17787ceb059688d824775c51cce1dfe554aafb2e60271f7e175d725" gracePeriod=30 Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.575949 4998 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="sg-core" containerID="cri-o://a7a97fb5fc48f1f3a5098986c6f470e11e683f23293ce3c132535ce7a75f7f2e" gracePeriod=30 Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.575982 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="proxy-httpd" containerID="cri-o://4255f4ba334c1cb3eadc9c02afddd601bb704c885a547fccf046c262321ff086" gracePeriod=30 Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.575995 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="ceilometer-notification-agent" containerID="cri-o://7bd08940027cd6ce215f7402e5e353df5c956e681fca1cb0bd706ada32ac99ff" gracePeriod=30 Feb 27 10:40:51 crc kubenswrapper[4998]: I0227 10:40:51.648608 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:40:52 crc kubenswrapper[4998]: I0227 10:40:52.153112 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:40:52 crc kubenswrapper[4998]: I0227 10:40:52.282332 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ba497ed5-458a-4720-a701-9e6f9a200c6d","Type":"ContainerStarted","Data":"f6386999b9ccfd7a721567fc06bee74688ac1592c2bfb3553efb08e35c597891"} Feb 27 10:40:52 crc kubenswrapper[4998]: I0227 10:40:52.290926 4998 generic.go:334] "Generic (PLEG): container finished" podID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerID="4255f4ba334c1cb3eadc9c02afddd601bb704c885a547fccf046c262321ff086" exitCode=0 Feb 27 10:40:52 crc kubenswrapper[4998]: I0227 10:40:52.290978 4998 generic.go:334] "Generic (PLEG): container finished" podID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerID="a7a97fb5fc48f1f3a5098986c6f470e11e683f23293ce3c132535ce7a75f7f2e" exitCode=2 Feb 27 10:40:52 crc kubenswrapper[4998]: I0227 10:40:52.290996 4998 generic.go:334] "Generic (PLEG): container finished" podID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerID="abcd36feb17787ceb059688d824775c51cce1dfe554aafb2e60271f7e175d725" exitCode=0 Feb 27 10:40:52 crc kubenswrapper[4998]: I0227 10:40:52.291019 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerDied","Data":"4255f4ba334c1cb3eadc9c02afddd601bb704c885a547fccf046c262321ff086"} Feb 27 10:40:52 crc kubenswrapper[4998]: I0227 10:40:52.291082 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerDied","Data":"a7a97fb5fc48f1f3a5098986c6f470e11e683f23293ce3c132535ce7a75f7f2e"} Feb 27 10:40:52 crc kubenswrapper[4998]: I0227 10:40:52.291105 4998 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerDied","Data":"abcd36feb17787ceb059688d824775c51cce1dfe554aafb2e60271f7e175d725"} Feb 27 10:40:52 crc kubenswrapper[4998]: I0227 10:40:52.778921 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2940ab76-4aef-4a1d-8cd5-6fd4cbced5be" path="/var/lib/kubelet/pods/2940ab76-4aef-4a1d-8cd5-6fd4cbced5be/volumes" Feb 27 10:40:53 crc kubenswrapper[4998]: I0227 10:40:53.300456 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ba497ed5-458a-4720-a701-9e6f9a200c6d","Type":"ContainerStarted","Data":"a59b3d77d804b1b815deb06f6424ee946b346591f6144b30317dab88eadc2017"} Feb 27 10:40:53 crc kubenswrapper[4998]: I0227 10:40:53.300780 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 10:40:53 crc kubenswrapper[4998]: I0227 10:40:53.321238 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.912280878 podStartE2EDuration="2.321199702s" podCreationTimestamp="2026-02-27 10:40:51 +0000 UTC" firstStartedPulling="2026-02-27 10:40:52.161219429 +0000 UTC m=+1404.159490397" lastFinishedPulling="2026-02-27 10:40:52.570138253 +0000 UTC m=+1404.568409221" observedRunningTime="2026-02-27 10:40:53.315299534 +0000 UTC m=+1405.313570512" watchObservedRunningTime="2026-02-27 10:40:53.321199702 +0000 UTC m=+1405.319470670" Feb 27 10:40:53 crc kubenswrapper[4998]: I0227 10:40:53.355787 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 10:40:53 crc kubenswrapper[4998]: I0227 10:40:53.357339 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 10:40:53 crc kubenswrapper[4998]: I0227 10:40:53.359970 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Feb 27 10:40:54 crc kubenswrapper[4998]: I0227 10:40:54.318052 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 10:40:55 crc kubenswrapper[4998]: E0227 10:40:55.114340 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92acff51_4ca2_43c6_ab0f_480e01e9efb8.slice/crio-conmon-550199ff57ddd5616ea967474384544dce1ead6cabf6b337129b1634632860fa.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.330582 4998 generic.go:334] "Generic (PLEG): container finished" podID="eb938d5c-49ff-476a-ae91-0c07a0321818" containerID="a6dd5e47c6abb35450458dde4ec8aeba8b15341e384fc2b0033b757d85ca5355" exitCode=137 Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.330650 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb938d5c-49ff-476a-ae91-0c07a0321818","Type":"ContainerDied","Data":"a6dd5e47c6abb35450458dde4ec8aeba8b15341e384fc2b0033b757d85ca5355"} Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.330966 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb938d5c-49ff-476a-ae91-0c07a0321818","Type":"ContainerDied","Data":"68425bd4470ad9827bfb1a3445fe697212c3997ff4195465881ece779f39e0d1"} Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.330986 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68425bd4470ad9827bfb1a3445fe697212c3997ff4195465881ece779f39e0d1" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.336283 4998 generic.go:334] "Generic (PLEG): container finished" podID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerID="7bd08940027cd6ce215f7402e5e353df5c956e681fca1cb0bd706ada32ac99ff" exitCode=0 Feb 27 10:40:55 
crc kubenswrapper[4998]: I0227 10:40:55.336409 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerDied","Data":"7bd08940027cd6ce215f7402e5e353df5c956e681fca1cb0bd706ada32ac99ff"} Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.347517 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.474559 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-config-data\") pod \"eb938d5c-49ff-476a-ae91-0c07a0321818\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.474690 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-combined-ca-bundle\") pod \"eb938d5c-49ff-476a-ae91-0c07a0321818\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.474751 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gczh\" (UniqueName: \"kubernetes.io/projected/eb938d5c-49ff-476a-ae91-0c07a0321818-kube-api-access-4gczh\") pod \"eb938d5c-49ff-476a-ae91-0c07a0321818\" (UID: \"eb938d5c-49ff-476a-ae91-0c07a0321818\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.480435 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb938d5c-49ff-476a-ae91-0c07a0321818-kube-api-access-4gczh" (OuterVolumeSpecName: "kube-api-access-4gczh") pod "eb938d5c-49ff-476a-ae91-0c07a0321818" (UID: "eb938d5c-49ff-476a-ae91-0c07a0321818"). InnerVolumeSpecName "kube-api-access-4gczh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.502922 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-config-data" (OuterVolumeSpecName: "config-data") pod "eb938d5c-49ff-476a-ae91-0c07a0321818" (UID: "eb938d5c-49ff-476a-ae91-0c07a0321818"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.514107 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb938d5c-49ff-476a-ae91-0c07a0321818" (UID: "eb938d5c-49ff-476a-ae91-0c07a0321818"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.529649 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.577844 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.577880 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb938d5c-49ff-476a-ae91-0c07a0321818-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.577892 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gczh\" (UniqueName: \"kubernetes.io/projected/eb938d5c-49ff-476a-ae91-0c07a0321818-kube-api-access-4gczh\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.679357 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvw4l\" (UniqueName: \"kubernetes.io/projected/8f31638e-7dd4-4dcc-b18f-2e2618086e49-kube-api-access-mvw4l\") pod \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.680272 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-config-data\") pod \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.680304 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-sg-core-conf-yaml\") pod \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 
10:40:55.680332 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-log-httpd\") pod \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.680411 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-run-httpd\") pod \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.680504 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-scripts\") pod \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.680540 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-combined-ca-bundle\") pod \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\" (UID: \"8f31638e-7dd4-4dcc-b18f-2e2618086e49\") " Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.680677 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8f31638e-7dd4-4dcc-b18f-2e2618086e49" (UID: "8f31638e-7dd4-4dcc-b18f-2e2618086e49"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.681034 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8f31638e-7dd4-4dcc-b18f-2e2618086e49" (UID: "8f31638e-7dd4-4dcc-b18f-2e2618086e49"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.681687 4998 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.681798 4998 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f31638e-7dd4-4dcc-b18f-2e2618086e49-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.684729 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-scripts" (OuterVolumeSpecName: "scripts") pod "8f31638e-7dd4-4dcc-b18f-2e2618086e49" (UID: "8f31638e-7dd4-4dcc-b18f-2e2618086e49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.686274 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f31638e-7dd4-4dcc-b18f-2e2618086e49-kube-api-access-mvw4l" (OuterVolumeSpecName: "kube-api-access-mvw4l") pod "8f31638e-7dd4-4dcc-b18f-2e2618086e49" (UID: "8f31638e-7dd4-4dcc-b18f-2e2618086e49"). InnerVolumeSpecName "kube-api-access-mvw4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.707111 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8f31638e-7dd4-4dcc-b18f-2e2618086e49" (UID: "8f31638e-7dd4-4dcc-b18f-2e2618086e49"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.781466 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f31638e-7dd4-4dcc-b18f-2e2618086e49" (UID: "8f31638e-7dd4-4dcc-b18f-2e2618086e49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.783617 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvw4l\" (UniqueName: \"kubernetes.io/projected/8f31638e-7dd4-4dcc-b18f-2e2618086e49-kube-api-access-mvw4l\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.783638 4998 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.783647 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.783655 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.801798 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-config-data" (OuterVolumeSpecName: "config-data") pod "8f31638e-7dd4-4dcc-b18f-2e2618086e49" (UID: "8f31638e-7dd4-4dcc-b18f-2e2618086e49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:40:55 crc kubenswrapper[4998]: I0227 10:40:55.885715 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f31638e-7dd4-4dcc-b18f-2e2618086e49-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.351044 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f31638e-7dd4-4dcc-b18f-2e2618086e49","Type":"ContainerDied","Data":"d16116264e02209b0d77368bcb2fb5d647eec4f06eb83f525518145b259ad183"} Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.351166 4998 scope.go:117] "RemoveContainer" containerID="4255f4ba334c1cb3eadc9c02afddd601bb704c885a547fccf046c262321ff086" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.351213 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.351066 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.382858 4998 scope.go:117] "RemoveContainer" containerID="a7a97fb5fc48f1f3a5098986c6f470e11e683f23293ce3c132535ce7a75f7f2e" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.396932 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.415313 4998 scope.go:117] "RemoveContainer" containerID="7bd08940027cd6ce215f7402e5e353df5c956e681fca1cb0bd706ada32ac99ff" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.415366 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.439594 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.453375 4998 scope.go:117] "RemoveContainer" containerID="abcd36feb17787ceb059688d824775c51cce1dfe554aafb2e60271f7e175d725" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.467335 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.480836 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:40:56 crc kubenswrapper[4998]: E0227 10:40:56.481256 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="ceilometer-notification-agent" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.481274 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="ceilometer-notification-agent" Feb 27 10:40:56 crc kubenswrapper[4998]: E0227 10:40:56.481294 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="sg-core" Feb 27 10:40:56 
crc kubenswrapper[4998]: I0227 10:40:56.481302 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="sg-core" Feb 27 10:40:56 crc kubenswrapper[4998]: E0227 10:40:56.481339 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="ceilometer-central-agent" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.481346 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="ceilometer-central-agent" Feb 27 10:40:56 crc kubenswrapper[4998]: E0227 10:40:56.481372 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb938d5c-49ff-476a-ae91-0c07a0321818" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.481380 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb938d5c-49ff-476a-ae91-0c07a0321818" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 10:40:56 crc kubenswrapper[4998]: E0227 10:40:56.481394 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="proxy-httpd" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.481400 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="proxy-httpd" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.481578 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="ceilometer-notification-agent" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.481589 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="proxy-httpd" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.481603 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" 
containerName="ceilometer-central-agent" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.481612 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" containerName="sg-core" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.481624 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb938d5c-49ff-476a-ae91-0c07a0321818" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.482242 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.483984 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.484155 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.485026 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.493967 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.506684 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.508969 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.511177 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.511177 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.511292 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.527951 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602126 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602160 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-log-httpd\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602326 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602396 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602451 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602506 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602532 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq4pf\" (UniqueName: \"kubernetes.io/projected/33ba9efe-d1af-4c38-b767-9f1b41518e97-kube-api-access-hq4pf\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602634 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-config-data\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602693 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-run-httpd\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602733 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hzgl\" (UniqueName: \"kubernetes.io/projected/56011d10-8d8f-4738-9f54-b33c5254701d-kube-api-access-8hzgl\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602773 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602795 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-scripts\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.602879 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704402 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704716 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704733 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-log-httpd\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704791 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704843 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704863 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704903 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704925 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq4pf\" (UniqueName: \"kubernetes.io/projected/33ba9efe-d1af-4c38-b767-9f1b41518e97-kube-api-access-hq4pf\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704950 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-config-data\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704971 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-run-httpd\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.704990 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hzgl\" (UniqueName: \"kubernetes.io/projected/56011d10-8d8f-4738-9f54-b33c5254701d-kube-api-access-8hzgl\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.705011 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.705028 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-scripts\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.705341 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-log-httpd\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.705939 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-run-httpd\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.708980 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.708980 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.709389 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.709984 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-scripts\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.710144 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.710764 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.711472 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.712382 4998 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-config-data\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.713869 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/33ba9efe-d1af-4c38-b767-9f1b41518e97-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.729544 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq4pf\" (UniqueName: \"kubernetes.io/projected/33ba9efe-d1af-4c38-b767-9f1b41518e97-kube-api-access-hq4pf\") pod \"nova-cell1-novncproxy-0\" (UID: \"33ba9efe-d1af-4c38-b767-9f1b41518e97\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.732432 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hzgl\" (UniqueName: \"kubernetes.io/projected/56011d10-8d8f-4738-9f54-b33c5254701d-kube-api-access-8hzgl\") pod \"ceilometer-0\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " pod="openstack/ceilometer-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.775285 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f31638e-7dd4-4dcc-b18f-2e2618086e49" path="/var/lib/kubelet/pods/8f31638e-7dd4-4dcc-b18f-2e2618086e49/volumes" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.776122 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb938d5c-49ff-476a-ae91-0c07a0321818" path="/var/lib/kubelet/pods/eb938d5c-49ff-476a-ae91-0c07a0321818/volumes" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.804429 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:40:56 crc kubenswrapper[4998]: I0227 10:40:56.828130 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:40:57 crc kubenswrapper[4998]: I0227 10:40:57.489566 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 10:40:57 crc kubenswrapper[4998]: I0227 10:40:57.490301 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 10:40:57 crc kubenswrapper[4998]: I0227 10:40:57.490764 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 10:40:57 crc kubenswrapper[4998]: I0227 10:40:57.494742 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 10:40:57 crc kubenswrapper[4998]: I0227 10:40:57.982925 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:40:57 crc kubenswrapper[4998]: W0227 10:40:57.986010 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56011d10_8d8f_4738_9f54_b33c5254701d.slice/crio-90bbdaa3ba21b0cc11a34d13d95d9960efef688b640c2f269b3325a10c22471d WatchSource:0}: Error finding container 90bbdaa3ba21b0cc11a34d13d95d9960efef688b640c2f269b3325a10c22471d: Status 404 returned error can't find the container with id 90bbdaa3ba21b0cc11a34d13d95d9960efef688b640c2f269b3325a10c22471d Feb 27 10:40:57 crc kubenswrapper[4998]: I0227 10:40:57.999031 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.369611 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"33ba9efe-d1af-4c38-b767-9f1b41518e97","Type":"ContainerStarted","Data":"ded5cd704583e582bf65af06bbe7da5560a144a4b0fd6cb709925d035254b0be"} Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.369929 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"33ba9efe-d1af-4c38-b767-9f1b41518e97","Type":"ContainerStarted","Data":"88d5ffbf9dc6cf3047a432a037f9d8cb67efd9709cadb55aa904c6072d537474"} Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.370775 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerStarted","Data":"90bbdaa3ba21b0cc11a34d13d95d9960efef688b640c2f269b3325a10c22471d"} Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.370989 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.382984 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.393483 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.393463571 podStartE2EDuration="2.393463571s" podCreationTimestamp="2026-02-27 10:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:40:58.387349957 +0000 UTC m=+1410.385620925" watchObservedRunningTime="2026-02-27 10:40:58.393463571 +0000 UTC m=+1410.391734539" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.572872 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-52g6d"] Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.574853 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.594447 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-52g6d"] Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.752017 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jxwr\" (UniqueName: \"kubernetes.io/projected/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-kube-api-access-9jxwr\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.752102 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.752135 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.752213 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-config\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.752262 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.752309 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.853644 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jxwr\" (UniqueName: \"kubernetes.io/projected/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-kube-api-access-9jxwr\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.853713 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.853758 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.853856 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-config\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.853893 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.853941 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.854619 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.857000 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.858159 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-sb\") pod 
\"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.858433 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-config\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.858702 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.872053 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jxwr\" (UniqueName: \"kubernetes.io/projected/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-kube-api-access-9jxwr\") pod \"dnsmasq-dns-89c5cd4d5-52g6d\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") " pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:58 crc kubenswrapper[4998]: I0227 10:40:58.934807 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:40:59 crc kubenswrapper[4998]: I0227 10:40:59.380145 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerStarted","Data":"c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207"} Feb 27 10:40:59 crc kubenswrapper[4998]: I0227 10:40:59.529000 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-52g6d"] Feb 27 10:40:59 crc kubenswrapper[4998]: W0227 10:40:59.531965 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb52da7f1_a553_4ea1_9e90_f0ea08ba7a14.slice/crio-72fc8679cf4772c9544660f22004ddb431a584c838de9799bcc047ab9d7bc1e3 WatchSource:0}: Error finding container 72fc8679cf4772c9544660f22004ddb431a584c838de9799bcc047ab9d7bc1e3: Status 404 returned error can't find the container with id 72fc8679cf4772c9544660f22004ddb431a584c838de9799bcc047ab9d7bc1e3 Feb 27 10:41:00 crc kubenswrapper[4998]: I0227 10:41:00.389264 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerStarted","Data":"b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524"} Feb 27 10:41:00 crc kubenswrapper[4998]: I0227 10:41:00.389595 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerStarted","Data":"958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac"} Feb 27 10:41:00 crc kubenswrapper[4998]: I0227 10:41:00.390665 4998 generic.go:334] "Generic (PLEG): container finished" podID="b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" containerID="e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c" exitCode=0 Feb 27 10:41:00 crc kubenswrapper[4998]: I0227 10:41:00.390717 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" event={"ID":"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14","Type":"ContainerDied","Data":"e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c"} Feb 27 10:41:00 crc kubenswrapper[4998]: I0227 10:41:00.390790 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" event={"ID":"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14","Type":"ContainerStarted","Data":"72fc8679cf4772c9544660f22004ddb431a584c838de9799bcc047ab9d7bc1e3"} Feb 27 10:41:00 crc kubenswrapper[4998]: I0227 10:41:00.891344 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:41:01 crc kubenswrapper[4998]: I0227 10:41:01.323022 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:41:01 crc kubenswrapper[4998]: I0227 10:41:01.400124 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" event={"ID":"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14","Type":"ContainerStarted","Data":"d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f"} Feb 27 10:41:01 crc kubenswrapper[4998]: I0227 10:41:01.400272 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-log" containerID="cri-o://a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9" gracePeriod=30 Feb 27 10:41:01 crc kubenswrapper[4998]: I0227 10:41:01.400750 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-api" containerID="cri-o://614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d" gracePeriod=30 Feb 27 10:41:01 crc kubenswrapper[4998]: I0227 10:41:01.430305 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" podStartSLOduration=3.430286868 podStartE2EDuration="3.430286868s" podCreationTimestamp="2026-02-27 10:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:41:01.423426538 +0000 UTC m=+1413.421697496" watchObservedRunningTime="2026-02-27 10:41:01.430286868 +0000 UTC m=+1413.428557836" Feb 27 10:41:01 crc kubenswrapper[4998]: I0227 10:41:01.670459 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 10:41:01 crc kubenswrapper[4998]: I0227 10:41:01.806500 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:41:02 crc kubenswrapper[4998]: I0227 10:41:02.410240 4998 generic.go:334] "Generic (PLEG): container finished" podID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerID="a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9" exitCode=143 Feb 27 10:41:02 crc kubenswrapper[4998]: I0227 10:41:02.411157 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1a7d456-32f8-4fab-a874-4d30b877628c","Type":"ContainerDied","Data":"a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9"} Feb 27 10:41:02 crc kubenswrapper[4998]: I0227 10:41:02.413763 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerStarted","Data":"2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b"} Feb 27 10:41:02 crc kubenswrapper[4998]: I0227 10:41:02.413814 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:41:02 crc kubenswrapper[4998]: I0227 10:41:02.413960 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="ceilometer-central-agent" containerID="cri-o://c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207" gracePeriod=30 Feb 27 10:41:02 crc kubenswrapper[4998]: I0227 10:41:02.414397 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="proxy-httpd" containerID="cri-o://2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b" gracePeriod=30 Feb 27 10:41:02 crc kubenswrapper[4998]: I0227 10:41:02.414458 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="sg-core" containerID="cri-o://b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524" gracePeriod=30 Feb 27 10:41:02 crc kubenswrapper[4998]: I0227 10:41:02.414497 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="ceilometer-notification-agent" containerID="cri-o://958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac" gracePeriod=30 Feb 27 10:41:03 crc kubenswrapper[4998]: I0227 10:41:03.430145 4998 generic.go:334] "Generic (PLEG): container finished" podID="56011d10-8d8f-4738-9f54-b33c5254701d" containerID="2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b" exitCode=0 Feb 27 10:41:03 crc kubenswrapper[4998]: I0227 10:41:03.430650 4998 generic.go:334] "Generic (PLEG): container finished" podID="56011d10-8d8f-4738-9f54-b33c5254701d" containerID="b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524" exitCode=2 Feb 27 10:41:03 crc kubenswrapper[4998]: I0227 10:41:03.430671 4998 generic.go:334] "Generic (PLEG): container finished" podID="56011d10-8d8f-4738-9f54-b33c5254701d" containerID="958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac" exitCode=0 
Feb 27 10:41:03 crc kubenswrapper[4998]: I0227 10:41:03.430285 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerDied","Data":"2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b"} Feb 27 10:41:03 crc kubenswrapper[4998]: I0227 10:41:03.431101 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerDied","Data":"b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524"} Feb 27 10:41:03 crc kubenswrapper[4998]: I0227 10:41:03.431126 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerDied","Data":"958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac"} Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.120734 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.127615 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.302991 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-log-httpd\") pod \"56011d10-8d8f-4738-9f54-b33c5254701d\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303268 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hzgl\" (UniqueName: \"kubernetes.io/projected/56011d10-8d8f-4738-9f54-b33c5254701d-kube-api-access-8hzgl\") pod \"56011d10-8d8f-4738-9f54-b33c5254701d\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303327 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1a7d456-32f8-4fab-a874-4d30b877628c-logs\") pod \"a1a7d456-32f8-4fab-a874-4d30b877628c\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303344 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-config-data\") pod \"56011d10-8d8f-4738-9f54-b33c5254701d\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303361 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-run-httpd\") pod \"56011d10-8d8f-4738-9f54-b33c5254701d\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303368 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-log-httpd" 
(OuterVolumeSpecName: "log-httpd") pod "56011d10-8d8f-4738-9f54-b33c5254701d" (UID: "56011d10-8d8f-4738-9f54-b33c5254701d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303402 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-sg-core-conf-yaml\") pod \"56011d10-8d8f-4738-9f54-b33c5254701d\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303452 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-scripts\") pod \"56011d10-8d8f-4738-9f54-b33c5254701d\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303481 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-ceilometer-tls-certs\") pod \"56011d10-8d8f-4738-9f54-b33c5254701d\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303507 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-combined-ca-bundle\") pod \"a1a7d456-32f8-4fab-a874-4d30b877628c\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303572 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rf27\" (UniqueName: \"kubernetes.io/projected/a1a7d456-32f8-4fab-a874-4d30b877628c-kube-api-access-6rf27\") pod \"a1a7d456-32f8-4fab-a874-4d30b877628c\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " 
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303586 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-combined-ca-bundle\") pod \"56011d10-8d8f-4738-9f54-b33c5254701d\" (UID: \"56011d10-8d8f-4738-9f54-b33c5254701d\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303766 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-config-data\") pod \"a1a7d456-32f8-4fab-a874-4d30b877628c\" (UID: \"a1a7d456-32f8-4fab-a874-4d30b877628c\") " Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.303807 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1a7d456-32f8-4fab-a874-4d30b877628c-logs" (OuterVolumeSpecName: "logs") pod "a1a7d456-32f8-4fab-a874-4d30b877628c" (UID: "a1a7d456-32f8-4fab-a874-4d30b877628c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.304167 4998 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.304183 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1a7d456-32f8-4fab-a874-4d30b877628c-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.305434 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "56011d10-8d8f-4738-9f54-b33c5254701d" (UID: "56011d10-8d8f-4738-9f54-b33c5254701d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.313668 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56011d10-8d8f-4738-9f54-b33c5254701d-kube-api-access-8hzgl" (OuterVolumeSpecName: "kube-api-access-8hzgl") pod "56011d10-8d8f-4738-9f54-b33c5254701d" (UID: "56011d10-8d8f-4738-9f54-b33c5254701d"). InnerVolumeSpecName "kube-api-access-8hzgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.313875 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-scripts" (OuterVolumeSpecName: "scripts") pod "56011d10-8d8f-4738-9f54-b33c5254701d" (UID: "56011d10-8d8f-4738-9f54-b33c5254701d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.316153 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a7d456-32f8-4fab-a874-4d30b877628c-kube-api-access-6rf27" (OuterVolumeSpecName: "kube-api-access-6rf27") pod "a1a7d456-32f8-4fab-a874-4d30b877628c" (UID: "a1a7d456-32f8-4fab-a874-4d30b877628c"). InnerVolumeSpecName "kube-api-access-6rf27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.352764 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "56011d10-8d8f-4738-9f54-b33c5254701d" (UID: "56011d10-8d8f-4738-9f54-b33c5254701d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.373527 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-config-data" (OuterVolumeSpecName: "config-data") pod "a1a7d456-32f8-4fab-a874-4d30b877628c" (UID: "a1a7d456-32f8-4fab-a874-4d30b877628c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.379422 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1a7d456-32f8-4fab-a874-4d30b877628c" (UID: "a1a7d456-32f8-4fab-a874-4d30b877628c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.385919 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "56011d10-8d8f-4738-9f54-b33c5254701d" (UID: "56011d10-8d8f-4738-9f54-b33c5254701d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.402369 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56011d10-8d8f-4738-9f54-b33c5254701d" (UID: "56011d10-8d8f-4738-9f54-b33c5254701d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.405472 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.405502 4998 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.405517 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.405528 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rf27\" (UniqueName: \"kubernetes.io/projected/a1a7d456-32f8-4fab-a874-4d30b877628c-kube-api-access-6rf27\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.405539 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.405551 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1a7d456-32f8-4fab-a874-4d30b877628c-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.405562 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hzgl\" (UniqueName: \"kubernetes.io/projected/56011d10-8d8f-4738-9f54-b33c5254701d-kube-api-access-8hzgl\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.405573 4998 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56011d10-8d8f-4738-9f54-b33c5254701d-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.405581 4998 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.432856 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92acff51_4ca2_43c6_ab0f_480e01e9efb8.slice/crio-conmon-550199ff57ddd5616ea967474384544dce1ead6cabf6b337129b1634632860fa.scope\": RecentStats: unable to find data in memory cache]"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.437594 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-config-data" (OuterVolumeSpecName: "config-data") pod "56011d10-8d8f-4738-9f54-b33c5254701d" (UID: "56011d10-8d8f-4738-9f54-b33c5254701d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.450861 4998 generic.go:334] "Generic (PLEG): container finished" podID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerID="614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d" exitCode=0
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.450930 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1a7d456-32f8-4fab-a874-4d30b877628c","Type":"ContainerDied","Data":"614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d"}
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.450960 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1a7d456-32f8-4fab-a874-4d30b877628c","Type":"ContainerDied","Data":"81aea7b7e58934072e2ddd5982036e75f41cea71d115222029df5d29ceee8aa9"}
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.450976 4998 scope.go:117] "RemoveContainer" containerID="614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.451117 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.462333 4998 generic.go:334] "Generic (PLEG): container finished" podID="56011d10-8d8f-4738-9f54-b33c5254701d" containerID="c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207" exitCode=0
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.462377 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerDied","Data":"c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207"}
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.462402 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56011d10-8d8f-4738-9f54-b33c5254701d","Type":"ContainerDied","Data":"90bbdaa3ba21b0cc11a34d13d95d9960efef688b640c2f269b3325a10c22471d"}
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.462458 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.490795 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.492076 4998 scope.go:117] "RemoveContainer" containerID="a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.507776 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56011d10-8d8f-4738-9f54-b33c5254701d-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.523926 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.540884 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.541365 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-log"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541382 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-log"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.541395 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="ceilometer-central-agent"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541402 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="ceilometer-central-agent"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.541419 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="sg-core"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541425 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="sg-core"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.541446 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-api"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541452 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-api"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.541462 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="ceilometer-notification-agent"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541468 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="ceilometer-notification-agent"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.541479 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="proxy-httpd"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541485 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="proxy-httpd"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541659 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-log"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541673 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="sg-core"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541690 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" containerName="nova-api-api"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541698 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="ceilometer-notification-agent"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541708 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="ceilometer-central-agent"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.541720 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" containerName="proxy-httpd"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.542614 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.546539 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.546697 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.547249 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.549453 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.557885 4998 scope.go:117] "RemoveContainer" containerID="614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.558357 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.560124 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d\": container with ID starting with 614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d not found: ID does not exist" containerID="614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.560161 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d"} err="failed to get container status \"614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d\": rpc error: code = NotFound desc = could not find container \"614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d\": container with ID starting with 614401a0865e5c7d2a1504f1312385ab35d0686fde7759d60f12466c2e88549d not found: ID does not exist"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.560187 4998 scope.go:117] "RemoveContainer" containerID="a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.560805 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9\": container with ID starting with a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9 not found: ID does not exist" containerID="a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.560856 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9"} err="failed to get container status \"a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9\": rpc error: code = NotFound desc = could not find container \"a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9\": container with ID starting with a1cf325b9a47e051ef21c8d128253e87b91b5b713af5f8ca103f4afddbcc1ee9 not found: ID does not exist"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.560892 4998 scope.go:117] "RemoveContainer" containerID="2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.569409 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.576319 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.578912 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.580984 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.581072 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.583497 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.583673 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.585499 4998 scope.go:117] "RemoveContainer" containerID="b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.610484 4998 scope.go:117] "RemoveContainer" containerID="958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.628433 4998 scope.go:117] "RemoveContainer" containerID="c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.646404 4998 scope.go:117] "RemoveContainer" containerID="2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.646853 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b\": container with ID starting with 2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b not found: ID does not exist" containerID="2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.646902 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b"} err="failed to get container status \"2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b\": rpc error: code = NotFound desc = could not find container \"2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b\": container with ID starting with 2a3e0a906fcb0402d2755b17d05b7011dc5385278b8a288ac992f6dc8d70782b not found: ID does not exist"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.646936 4998 scope.go:117] "RemoveContainer" containerID="b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.647337 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524\": container with ID starting with b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524 not found: ID does not exist" containerID="b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.647362 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524"} err="failed to get container status \"b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524\": rpc error: code = NotFound desc = could not find container \"b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524\": container with ID starting with b7ba94d6cac7be27477770f56dda4b9fc92c32fd976243591a3d16aeb7244524 not found: ID does not exist"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.647382 4998 scope.go:117] "RemoveContainer" containerID="958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.647618 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac\": container with ID starting with 958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac not found: ID does not exist" containerID="958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.647655 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac"} err="failed to get container status \"958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac\": rpc error: code = NotFound desc = could not find container \"958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac\": container with ID starting with 958ab5ceae953c565fa483456423034c3be6d5888dd7dd69d2f08357c72627ac not found: ID does not exist"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.647672 4998 scope.go:117] "RemoveContainer" containerID="c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207"
Feb 27 10:41:05 crc kubenswrapper[4998]: E0227 10:41:05.648884 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207\": container with ID starting with c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207 not found: ID does not exist" containerID="c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.648922 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207"} err="failed to get container status \"c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207\": rpc error: code = NotFound desc = could not find container \"c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207\": container with ID starting with c30ff07c78b640b86fa7e1ac868d0cc279128b8367e90aa6e0ccbb91d5e9a207 not found: ID does not exist"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711081 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1132ff-2042-4f61-9889-9d666509cfc3-run-httpd\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711122 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-public-tls-certs\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711202 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-config-data\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711240 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq8gv\" (UniqueName: \"kubernetes.io/projected/3fe515b7-ba51-4d18-a112-41374a6c932c-kube-api-access-hq8gv\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711262 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe515b7-ba51-4d18-a112-41374a6c932c-logs\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711289 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711357 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1132ff-2042-4f61-9889-9d666509cfc3-log-httpd\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711371 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fklj4\" (UniqueName: \"kubernetes.io/projected/4f1132ff-2042-4f61-9889-9d666509cfc3-kube-api-access-fklj4\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711389 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-scripts\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711403 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711420 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711434 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711469 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.711484 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-config-data\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.812931 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-config-data\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.813496 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq8gv\" (UniqueName: \"kubernetes.io/projected/3fe515b7-ba51-4d18-a112-41374a6c932c-kube-api-access-hq8gv\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.813613 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe515b7-ba51-4d18-a112-41374a6c932c-logs\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.813735 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.813869 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1132ff-2042-4f61-9889-9d666509cfc3-log-httpd\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.813961 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fklj4\" (UniqueName: \"kubernetes.io/projected/4f1132ff-2042-4f61-9889-9d666509cfc3-kube-api-access-fklj4\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.814062 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.814151 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-scripts\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.814248 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.814331 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.814402 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1132ff-2042-4f61-9889-9d666509cfc3-log-httpd\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.813972 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe515b7-ba51-4d18-a112-41374a6c932c-logs\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.814576 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.814663 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-config-data\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.814807 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1132ff-2042-4f61-9889-9d666509cfc3-run-httpd\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.814899 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-public-tls-certs\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.815898 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1132ff-2042-4f61-9889-9d666509cfc3-run-httpd\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.817199 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-config-data\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.817768 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.818900 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-public-tls-certs\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.819742 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.819982 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-scripts\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.822414 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-config-data\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.822803 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.825012 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.825186 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f1132ff-2042-4f61-9889-9d666509cfc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.840134 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq8gv\" (UniqueName: \"kubernetes.io/projected/3fe515b7-ba51-4d18-a112-41374a6c932c-kube-api-access-hq8gv\") pod \"nova-api-0\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.841359 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fklj4\" (UniqueName: \"kubernetes.io/projected/4f1132ff-2042-4f61-9889-9d666509cfc3-kube-api-access-fklj4\") pod \"ceilometer-0\" (UID: \"4f1132ff-2042-4f61-9889-9d666509cfc3\") " pod="openstack/ceilometer-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.869837 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 10:41:05 crc kubenswrapper[4998]: I0227 10:41:05.899970 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:41:06 crc kubenswrapper[4998]: I0227 10:41:06.380928 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:41:06 crc kubenswrapper[4998]: I0227 10:41:06.461243 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:41:06 crc kubenswrapper[4998]: I0227 10:41:06.475768 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3fe515b7-ba51-4d18-a112-41374a6c932c","Type":"ContainerStarted","Data":"d63159f02191038b3f48fffa6ffdcc5c13e6f39dd40fbd3c82142ca16459cb93"}
Feb 27 10:41:06 crc kubenswrapper[4998]: I0227 10:41:06.476864 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1132ff-2042-4f61-9889-9d666509cfc3","Type":"ContainerStarted","Data":"94833b9efe7d553fc95119fb865cfd0b2d0e682bc4e196ea8e1cc40746d5e5d9"}
Feb 27 10:41:06 crc kubenswrapper[4998]: I0227 10:41:06.778410 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56011d10-8d8f-4738-9f54-b33c5254701d" path="/var/lib/kubelet/pods/56011d10-8d8f-4738-9f54-b33c5254701d/volumes"
Feb 27 10:41:06 crc kubenswrapper[4998]: I0227 10:41:06.780441 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a7d456-32f8-4fab-a874-4d30b877628c" path="/var/lib/kubelet/pods/a1a7d456-32f8-4fab-a874-4d30b877628c/volumes"
Feb 27 10:41:06 crc kubenswrapper[4998]: I0227 10:41:06.804872 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 27 10:41:06 crc kubenswrapper[4998]: I0227 10:41:06.836088 4998
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.490353 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1132ff-2042-4f61-9889-9d666509cfc3","Type":"ContainerStarted","Data":"0b1d44c0431a2525bf842737306b99080918e51e2d295a49af44d25a47ba31a6"} Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.493711 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3fe515b7-ba51-4d18-a112-41374a6c932c","Type":"ContainerStarted","Data":"70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58"} Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.493858 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3fe515b7-ba51-4d18-a112-41374a6c932c","Type":"ContainerStarted","Data":"07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695"} Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.512101 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.517805 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.517780365 podStartE2EDuration="2.517780365s" podCreationTimestamp="2026-02-27 10:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:41:07.512153117 +0000 UTC m=+1419.510424085" watchObservedRunningTime="2026-02-27 10:41:07.517780365 +0000 UTC m=+1419.516051333" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.661409 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k7cgp"] Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.662731 4998 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.669605 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.669666 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.682132 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k7cgp"] Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.768530 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.768577 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-config-data\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.768613 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-scripts\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.768876 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwq52\" (UniqueName: 
\"kubernetes.io/projected/00fdbd39-90de-4af0-b166-7b8f106ec115-kube-api-access-hwq52\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.875178 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwq52\" (UniqueName: \"kubernetes.io/projected/00fdbd39-90de-4af0-b166-7b8f106ec115-kube-api-access-hwq52\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.875477 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.875501 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-config-data\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.875544 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-scripts\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.881894 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.891047 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-config-data\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.896911 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-scripts\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.908788 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwq52\" (UniqueName: \"kubernetes.io/projected/00fdbd39-90de-4af0-b166-7b8f106ec115-kube-api-access-hwq52\") pod \"nova-cell1-cell-mapping-k7cgp\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:07 crc kubenswrapper[4998]: I0227 10:41:07.990897 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:08 crc kubenswrapper[4998]: I0227 10:41:08.423673 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k7cgp"] Feb 27 10:41:08 crc kubenswrapper[4998]: W0227 10:41:08.434777 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00fdbd39_90de_4af0_b166_7b8f106ec115.slice/crio-37e05194ec1814a4efcc93086a69f08d47fcea03d185a4b08a53ab50f177deb1 WatchSource:0}: Error finding container 37e05194ec1814a4efcc93086a69f08d47fcea03d185a4b08a53ab50f177deb1: Status 404 returned error can't find the container with id 37e05194ec1814a4efcc93086a69f08d47fcea03d185a4b08a53ab50f177deb1 Feb 27 10:41:08 crc kubenswrapper[4998]: I0227 10:41:08.505317 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1132ff-2042-4f61-9889-9d666509cfc3","Type":"ContainerStarted","Data":"cfba8f07b6b5273df2c2e0a72a4e2884b44615e22915be875953d5c64e971385"} Feb 27 10:41:08 crc kubenswrapper[4998]: I0227 10:41:08.506571 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k7cgp" event={"ID":"00fdbd39-90de-4af0-b166-7b8f106ec115","Type":"ContainerStarted","Data":"37e05194ec1814a4efcc93086a69f08d47fcea03d185a4b08a53ab50f177deb1"} Feb 27 10:41:08 crc kubenswrapper[4998]: I0227 10:41:08.936983 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.008610 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d6jxk"] Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.008895 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" podUID="f4b15f23-61d5-4028-a889-c212564533cf" containerName="dnsmasq-dns" 
containerID="cri-o://93fbe3f8ae0890e4c907024bc758f53363a96cece0ec44c2f50637b785c8c03f" gracePeriod=10 Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.517728 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k7cgp" event={"ID":"00fdbd39-90de-4af0-b166-7b8f106ec115","Type":"ContainerStarted","Data":"a090d5f3dba05c4961e0a631ba3f4fcb81c9f62f7244bd6634290d47e69a4c5d"} Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.520437 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1132ff-2042-4f61-9889-9d666509cfc3","Type":"ContainerStarted","Data":"8d53141d69b1ed49c6284b046f61cf622a56efa32cce2cb2bbd1275f47086f51"} Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.522715 4998 generic.go:334] "Generic (PLEG): container finished" podID="f4b15f23-61d5-4028-a889-c212564533cf" containerID="93fbe3f8ae0890e4c907024bc758f53363a96cece0ec44c2f50637b785c8c03f" exitCode=0 Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.522743 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" event={"ID":"f4b15f23-61d5-4028-a889-c212564533cf","Type":"ContainerDied","Data":"93fbe3f8ae0890e4c907024bc758f53363a96cece0ec44c2f50637b785c8c03f"} Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.522757 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" event={"ID":"f4b15f23-61d5-4028-a889-c212564533cf","Type":"ContainerDied","Data":"fa5e29e724fe4d82ea3b554abaf1733c8474df2027f9bd7fa18716076b56ed99"} Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.522768 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa5e29e724fe4d82ea3b554abaf1733c8474df2027f9bd7fa18716076b56ed99" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.528004 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.558513 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k7cgp" podStartSLOduration=2.5584931060000002 podStartE2EDuration="2.558493106s" podCreationTimestamp="2026-02-27 10:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:41:09.535491043 +0000 UTC m=+1421.533762011" watchObservedRunningTime="2026-02-27 10:41:09.558493106 +0000 UTC m=+1421.556764074" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.610620 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-svc\") pod \"f4b15f23-61d5-4028-a889-c212564533cf\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.610794 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-nb\") pod \"f4b15f23-61d5-4028-a889-c212564533cf\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.611445 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-swift-storage-0\") pod \"f4b15f23-61d5-4028-a889-c212564533cf\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.611477 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-config\") pod \"f4b15f23-61d5-4028-a889-c212564533cf\" (UID: 
\"f4b15f23-61d5-4028-a889-c212564533cf\") " Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.611566 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdfgk\" (UniqueName: \"kubernetes.io/projected/f4b15f23-61d5-4028-a889-c212564533cf-kube-api-access-gdfgk\") pod \"f4b15f23-61d5-4028-a889-c212564533cf\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.611591 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-sb\") pod \"f4b15f23-61d5-4028-a889-c212564533cf\" (UID: \"f4b15f23-61d5-4028-a889-c212564533cf\") " Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.632447 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b15f23-61d5-4028-a889-c212564533cf-kube-api-access-gdfgk" (OuterVolumeSpecName: "kube-api-access-gdfgk") pod "f4b15f23-61d5-4028-a889-c212564533cf" (UID: "f4b15f23-61d5-4028-a889-c212564533cf"). InnerVolumeSpecName "kube-api-access-gdfgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.658368 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4b15f23-61d5-4028-a889-c212564533cf" (UID: "f4b15f23-61d5-4028-a889-c212564533cf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.663093 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4b15f23-61d5-4028-a889-c212564533cf" (UID: "f4b15f23-61d5-4028-a889-c212564533cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.672064 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-config" (OuterVolumeSpecName: "config") pod "f4b15f23-61d5-4028-a889-c212564533cf" (UID: "f4b15f23-61d5-4028-a889-c212564533cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.681063 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4b15f23-61d5-4028-a889-c212564533cf" (UID: "f4b15f23-61d5-4028-a889-c212564533cf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.690668 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4b15f23-61d5-4028-a889-c212564533cf" (UID: "f4b15f23-61d5-4028-a889-c212564533cf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.721564 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.721598 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.721611 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.721621 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdfgk\" (UniqueName: \"kubernetes.io/projected/f4b15f23-61d5-4028-a889-c212564533cf-kube-api-access-gdfgk\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.721631 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:09 crc kubenswrapper[4998]: I0227 10:41:09.721641 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b15f23-61d5-4028-a889-c212564533cf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.504499 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.504808 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.504852 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.505689 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa835617bfc870e1b2eabc00e16bdc9b210a2250fe70bb608d05ed5f2f06bfbc"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.505758 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://fa835617bfc870e1b2eabc00e16bdc9b210a2250fe70bb608d05ed5f2f06bfbc" gracePeriod=600 Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.537745 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1132ff-2042-4f61-9889-9d666509cfc3","Type":"ContainerStarted","Data":"b9e74804c9b6b3706a81b08c547136091adc4a357baa5a927adc472197718e33"} Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.537898 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-d6jxk" Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.538484 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.569963 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7111792449999998 podStartE2EDuration="5.569944487s" podCreationTimestamp="2026-02-27 10:41:05 +0000 UTC" firstStartedPulling="2026-02-27 10:41:06.456013397 +0000 UTC m=+1418.454284365" lastFinishedPulling="2026-02-27 10:41:10.314778639 +0000 UTC m=+1422.313049607" observedRunningTime="2026-02-27 10:41:10.566787391 +0000 UTC m=+1422.565058359" watchObservedRunningTime="2026-02-27 10:41:10.569944487 +0000 UTC m=+1422.568215455" Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.586663 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d6jxk"] Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.594661 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-d6jxk"] Feb 27 10:41:10 crc kubenswrapper[4998]: I0227 10:41:10.777010 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b15f23-61d5-4028-a889-c212564533cf" path="/var/lib/kubelet/pods/f4b15f23-61d5-4028-a889-c212564533cf/volumes" Feb 27 10:41:11 crc kubenswrapper[4998]: I0227 10:41:11.565832 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="fa835617bfc870e1b2eabc00e16bdc9b210a2250fe70bb608d05ed5f2f06bfbc" exitCode=0 Feb 27 10:41:11 crc kubenswrapper[4998]: I0227 10:41:11.565917 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" 
event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"fa835617bfc870e1b2eabc00e16bdc9b210a2250fe70bb608d05ed5f2f06bfbc"} Feb 27 10:41:11 crc kubenswrapper[4998]: I0227 10:41:11.566203 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49"} Feb 27 10:41:11 crc kubenswrapper[4998]: I0227 10:41:11.566278 4998 scope.go:117] "RemoveContainer" containerID="d4bd8462bb415ab0298cf24a40c264a6708906ed9fa7eae7a8b7e15bb36a14c4" Feb 27 10:41:13 crc kubenswrapper[4998]: I0227 10:41:13.589508 4998 generic.go:334] "Generic (PLEG): container finished" podID="00fdbd39-90de-4af0-b166-7b8f106ec115" containerID="a090d5f3dba05c4961e0a631ba3f4fcb81c9f62f7244bd6634290d47e69a4c5d" exitCode=0 Feb 27 10:41:13 crc kubenswrapper[4998]: I0227 10:41:13.590033 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k7cgp" event={"ID":"00fdbd39-90de-4af0-b166-7b8f106ec115","Type":"ContainerDied","Data":"a090d5f3dba05c4961e0a631ba3f4fcb81c9f62f7244bd6634290d47e69a4c5d"} Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.015997 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.130737 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-combined-ca-bundle\") pod \"00fdbd39-90de-4af0-b166-7b8f106ec115\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.130896 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwq52\" (UniqueName: \"kubernetes.io/projected/00fdbd39-90de-4af0-b166-7b8f106ec115-kube-api-access-hwq52\") pod \"00fdbd39-90de-4af0-b166-7b8f106ec115\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.130970 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-scripts\") pod \"00fdbd39-90de-4af0-b166-7b8f106ec115\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.131033 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-config-data\") pod \"00fdbd39-90de-4af0-b166-7b8f106ec115\" (UID: \"00fdbd39-90de-4af0-b166-7b8f106ec115\") " Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.136097 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-scripts" (OuterVolumeSpecName: "scripts") pod "00fdbd39-90de-4af0-b166-7b8f106ec115" (UID: "00fdbd39-90de-4af0-b166-7b8f106ec115"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.136835 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00fdbd39-90de-4af0-b166-7b8f106ec115-kube-api-access-hwq52" (OuterVolumeSpecName: "kube-api-access-hwq52") pod "00fdbd39-90de-4af0-b166-7b8f106ec115" (UID: "00fdbd39-90de-4af0-b166-7b8f106ec115"). InnerVolumeSpecName "kube-api-access-hwq52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.157915 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-config-data" (OuterVolumeSpecName: "config-data") pod "00fdbd39-90de-4af0-b166-7b8f106ec115" (UID: "00fdbd39-90de-4af0-b166-7b8f106ec115"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.161189 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00fdbd39-90de-4af0-b166-7b8f106ec115" (UID: "00fdbd39-90de-4af0-b166-7b8f106ec115"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.232989 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.233037 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwq52\" (UniqueName: \"kubernetes.io/projected/00fdbd39-90de-4af0-b166-7b8f106ec115-kube-api-access-hwq52\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.233053 4998 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.233064 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00fdbd39-90de-4af0-b166-7b8f106ec115-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.613735 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k7cgp" event={"ID":"00fdbd39-90de-4af0-b166-7b8f106ec115","Type":"ContainerDied","Data":"37e05194ec1814a4efcc93086a69f08d47fcea03d185a4b08a53ab50f177deb1"} Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.613982 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37e05194ec1814a4efcc93086a69f08d47fcea03d185a4b08a53ab50f177deb1" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.613810 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k7cgp" Feb 27 10:41:15 crc kubenswrapper[4998]: E0227 10:41:15.688276 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92acff51_4ca2_43c6_ab0f_480e01e9efb8.slice/crio-conmon-550199ff57ddd5616ea967474384544dce1ead6cabf6b337129b1634632860fa.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.812420 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.812735 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerName="nova-api-log" containerID="cri-o://07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695" gracePeriod=30 Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.812951 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerName="nova-api-api" containerID="cri-o://70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58" gracePeriod=30 Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.836052 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.836348 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="05c40d08-f020-4ef5-8e19-fbbc9abe46a4" containerName="nova-scheduler-scheduler" containerID="cri-o://7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9" gracePeriod=30 Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.846886 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 
10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.847116 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-log" containerID="cri-o://765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782" gracePeriod=30 Feb 27 10:41:15 crc kubenswrapper[4998]: I0227 10:41:15.847192 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-metadata" containerID="cri-o://abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae" gracePeriod=30 Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.395203 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.561014 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-public-tls-certs\") pod \"3fe515b7-ba51-4d18-a112-41374a6c932c\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.561159 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe515b7-ba51-4d18-a112-41374a6c932c-logs\") pod \"3fe515b7-ba51-4d18-a112-41374a6c932c\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.561273 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-config-data\") pod \"3fe515b7-ba51-4d18-a112-41374a6c932c\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.561297 4998 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-internal-tls-certs\") pod \"3fe515b7-ba51-4d18-a112-41374a6c932c\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.561370 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-combined-ca-bundle\") pod \"3fe515b7-ba51-4d18-a112-41374a6c932c\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.561396 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq8gv\" (UniqueName: \"kubernetes.io/projected/3fe515b7-ba51-4d18-a112-41374a6c932c-kube-api-access-hq8gv\") pod \"3fe515b7-ba51-4d18-a112-41374a6c932c\" (UID: \"3fe515b7-ba51-4d18-a112-41374a6c932c\") " Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.562105 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fe515b7-ba51-4d18-a112-41374a6c932c-logs" (OuterVolumeSpecName: "logs") pod "3fe515b7-ba51-4d18-a112-41374a6c932c" (UID: "3fe515b7-ba51-4d18-a112-41374a6c932c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.562529 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe515b7-ba51-4d18-a112-41374a6c932c-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.574291 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe515b7-ba51-4d18-a112-41374a6c932c-kube-api-access-hq8gv" (OuterVolumeSpecName: "kube-api-access-hq8gv") pod "3fe515b7-ba51-4d18-a112-41374a6c932c" (UID: "3fe515b7-ba51-4d18-a112-41374a6c932c"). InnerVolumeSpecName "kube-api-access-hq8gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.601693 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-config-data" (OuterVolumeSpecName: "config-data") pod "3fe515b7-ba51-4d18-a112-41374a6c932c" (UID: "3fe515b7-ba51-4d18-a112-41374a6c932c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.613069 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fe515b7-ba51-4d18-a112-41374a6c932c" (UID: "3fe515b7-ba51-4d18-a112-41374a6c932c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.625186 4998 generic.go:334] "Generic (PLEG): container finished" podID="952fef0f-7957-4cec-81ee-60043bf510c9" containerID="765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782" exitCode=143 Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.625307 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952fef0f-7957-4cec-81ee-60043bf510c9","Type":"ContainerDied","Data":"765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782"} Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.626917 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3fe515b7-ba51-4d18-a112-41374a6c932c" (UID: "3fe515b7-ba51-4d18-a112-41374a6c932c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.628374 4998 generic.go:334] "Generic (PLEG): container finished" podID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerID="70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58" exitCode=0 Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.628402 4998 generic.go:334] "Generic (PLEG): container finished" podID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerID="07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695" exitCode=143 Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.628424 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3fe515b7-ba51-4d18-a112-41374a6c932c","Type":"ContainerDied","Data":"70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58"} Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.628454 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"3fe515b7-ba51-4d18-a112-41374a6c932c","Type":"ContainerDied","Data":"07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695"} Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.628466 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3fe515b7-ba51-4d18-a112-41374a6c932c","Type":"ContainerDied","Data":"d63159f02191038b3f48fffa6ffdcc5c13e6f39dd40fbd3c82142ca16459cb93"} Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.628484 4998 scope.go:117] "RemoveContainer" containerID="70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.628676 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.644006 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3fe515b7-ba51-4d18-a112-41374a6c932c" (UID: "3fe515b7-ba51-4d18-a112-41374a6c932c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.663187 4998 scope.go:117] "RemoveContainer" containerID="07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.664657 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.664719 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq8gv\" (UniqueName: \"kubernetes.io/projected/3fe515b7-ba51-4d18-a112-41374a6c932c-kube-api-access-hq8gv\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.664749 4998 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.664774 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.664800 4998 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe515b7-ba51-4d18-a112-41374a6c932c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.690122 4998 scope.go:117] "RemoveContainer" containerID="70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58" Feb 27 10:41:16 crc kubenswrapper[4998]: E0227 10:41:16.690693 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58\": container with ID starting with 70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58 not found: ID does not exist" containerID="70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.690936 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58"} err="failed to get container status \"70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58\": rpc error: code = NotFound desc = could not find container \"70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58\": container with ID starting with 70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58 not found: ID does not exist" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.691053 4998 scope.go:117] "RemoveContainer" containerID="07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695" Feb 27 10:41:16 crc kubenswrapper[4998]: E0227 10:41:16.691657 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695\": container with ID starting with 07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695 not found: ID does not exist" containerID="07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.691699 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695"} err="failed to get container status \"07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695\": rpc error: code = NotFound desc = could not find container \"07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695\": container with ID 
starting with 07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695 not found: ID does not exist" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.691728 4998 scope.go:117] "RemoveContainer" containerID="70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.692162 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58"} err="failed to get container status \"70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58\": rpc error: code = NotFound desc = could not find container \"70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58\": container with ID starting with 70c0197a29b373e727700003dc0a0cefa56ff774b5356be8f96a287ebf505d58 not found: ID does not exist" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.692198 4998 scope.go:117] "RemoveContainer" containerID="07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.692559 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695"} err="failed to get container status \"07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695\": rpc error: code = NotFound desc = could not find container \"07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695\": container with ID starting with 07d43a05312a770fe0abbe0fd0bda263e5ba3180959c3455c041d5314d71c695 not found: ID does not exist" Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.988827 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:41:16 crc kubenswrapper[4998]: I0227 10:41:16.995953 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 
10:41:17.021934 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 10:41:17 crc kubenswrapper[4998]: E0227 10:41:17.022434 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerName="nova-api-api" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.022457 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerName="nova-api-api" Feb 27 10:41:17 crc kubenswrapper[4998]: E0227 10:41:17.022470 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerName="nova-api-log" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.022479 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerName="nova-api-log" Feb 27 10:41:17 crc kubenswrapper[4998]: E0227 10:41:17.022520 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b15f23-61d5-4028-a889-c212564533cf" containerName="dnsmasq-dns" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.022528 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b15f23-61d5-4028-a889-c212564533cf" containerName="dnsmasq-dns" Feb 27 10:41:17 crc kubenswrapper[4998]: E0227 10:41:17.022548 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b15f23-61d5-4028-a889-c212564533cf" containerName="init" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.022556 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b15f23-61d5-4028-a889-c212564533cf" containerName="init" Feb 27 10:41:17 crc kubenswrapper[4998]: E0227 10:41:17.022566 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00fdbd39-90de-4af0-b166-7b8f106ec115" containerName="nova-manage" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.022574 4998 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="00fdbd39-90de-4af0-b166-7b8f106ec115" containerName="nova-manage" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.022812 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b15f23-61d5-4028-a889-c212564533cf" containerName="dnsmasq-dns" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.022839 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="00fdbd39-90de-4af0-b166-7b8f106ec115" containerName="nova-manage" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.022849 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerName="nova-api-api" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.022863 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe515b7-ba51-4d18-a112-41374a6c932c" containerName="nova-api-log" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.024389 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.028069 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.028541 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.028892 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.035184 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.174158 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.174213 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba55db21-2e4c-4171-ab36-8b3ad880e27f-logs\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.174275 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-config-data\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.174385 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.174444 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqtrb\" (UniqueName: \"kubernetes.io/projected/ba55db21-2e4c-4171-ab36-8b3ad880e27f-kube-api-access-rqtrb\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.174501 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-public-tls-certs\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.276527 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.276577 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba55db21-2e4c-4171-ab36-8b3ad880e27f-logs\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.276606 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-config-data\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.276681 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.276730 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqtrb\" (UniqueName: \"kubernetes.io/projected/ba55db21-2e4c-4171-ab36-8b3ad880e27f-kube-api-access-rqtrb\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.276771 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.277094 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba55db21-2e4c-4171-ab36-8b3ad880e27f-logs\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.282423 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.282574 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-config-data\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.282664 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-public-tls-certs\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.284398 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba55db21-2e4c-4171-ab36-8b3ad880e27f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.298456 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqtrb\" (UniqueName: 
\"kubernetes.io/projected/ba55db21-2e4c-4171-ab36-8b3ad880e27f-kube-api-access-rqtrb\") pod \"nova-api-0\" (UID: \"ba55db21-2e4c-4171-ab36-8b3ad880e27f\") " pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.340295 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:41:17 crc kubenswrapper[4998]: E0227 10:41:17.440211 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 10:41:17 crc kubenswrapper[4998]: E0227 10:41:17.443863 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 10:41:17 crc kubenswrapper[4998]: E0227 10:41:17.446539 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 10:41:17 crc kubenswrapper[4998]: E0227 10:41:17.446569 4998 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="05c40d08-f020-4ef5-8e19-fbbc9abe46a4" containerName="nova-scheduler-scheduler" Feb 27 10:41:17 crc kubenswrapper[4998]: I0227 10:41:17.830697 4998 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:41:17 crc kubenswrapper[4998]: W0227 10:41:17.834809 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba55db21_2e4c_4171_ab36_8b3ad880e27f.slice/crio-41738377075d43f6429fd72e55f548cc0a51eed7600af1bf6e12e23d1c99846e WatchSource:0}: Error finding container 41738377075d43f6429fd72e55f548cc0a51eed7600af1bf6e12e23d1c99846e: Status 404 returned error can't find the container with id 41738377075d43f6429fd72e55f548cc0a51eed7600af1bf6e12e23d1c99846e Feb 27 10:41:18 crc kubenswrapper[4998]: I0227 10:41:18.650626 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba55db21-2e4c-4171-ab36-8b3ad880e27f","Type":"ContainerStarted","Data":"670771f86ec96abddd9eae5b686b60562bc277cb296fc6d1a480a4eb01101362"} Feb 27 10:41:18 crc kubenswrapper[4998]: I0227 10:41:18.650872 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba55db21-2e4c-4171-ab36-8b3ad880e27f","Type":"ContainerStarted","Data":"fb7df603b25ecec0a0e253d8a86146a48a9062d412630c397c22ef6e6d1c4893"} Feb 27 10:41:18 crc kubenswrapper[4998]: I0227 10:41:18.650882 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba55db21-2e4c-4171-ab36-8b3ad880e27f","Type":"ContainerStarted","Data":"41738377075d43f6429fd72e55f548cc0a51eed7600af1bf6e12e23d1c99846e"} Feb 27 10:41:18 crc kubenswrapper[4998]: I0227 10:41:18.680483 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.68044712 podStartE2EDuration="2.68044712s" podCreationTimestamp="2026-02-27 10:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:41:18.678986081 +0000 UTC m=+1430.677257069" watchObservedRunningTime="2026-02-27 
10:41:18.68044712 +0000 UTC m=+1430.678718088" Feb 27 10:41:18 crc kubenswrapper[4998]: I0227 10:41:18.777848 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe515b7-ba51-4d18-a112-41374a6c932c" path="/var/lib/kubelet/pods/3fe515b7-ba51-4d18-a112-41374a6c932c/volumes" Feb 27 10:41:18 crc kubenswrapper[4998]: I0227 10:41:18.987187 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:52984->10.217.0.201:8775: read: connection reset by peer" Feb 27 10:41:18 crc kubenswrapper[4998]: I0227 10:41:18.987237 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:52982->10.217.0.201:8775: read: connection reset by peer" Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.519429 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.616946 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqfd9\" (UniqueName: \"kubernetes.io/projected/952fef0f-7957-4cec-81ee-60043bf510c9-kube-api-access-jqfd9\") pod \"952fef0f-7957-4cec-81ee-60043bf510c9\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.617003 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-config-data\") pod \"952fef0f-7957-4cec-81ee-60043bf510c9\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.617256 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-combined-ca-bundle\") pod \"952fef0f-7957-4cec-81ee-60043bf510c9\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.617335 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952fef0f-7957-4cec-81ee-60043bf510c9-logs\") pod \"952fef0f-7957-4cec-81ee-60043bf510c9\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.617565 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-nova-metadata-tls-certs\") pod \"952fef0f-7957-4cec-81ee-60043bf510c9\" (UID: \"952fef0f-7957-4cec-81ee-60043bf510c9\") " Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.617781 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/952fef0f-7957-4cec-81ee-60043bf510c9-logs" (OuterVolumeSpecName: "logs") pod "952fef0f-7957-4cec-81ee-60043bf510c9" (UID: "952fef0f-7957-4cec-81ee-60043bf510c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.618403 4998 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952fef0f-7957-4cec-81ee-60043bf510c9-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.636603 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952fef0f-7957-4cec-81ee-60043bf510c9-kube-api-access-jqfd9" (OuterVolumeSpecName: "kube-api-access-jqfd9") pod "952fef0f-7957-4cec-81ee-60043bf510c9" (UID: "952fef0f-7957-4cec-81ee-60043bf510c9"). InnerVolumeSpecName "kube-api-access-jqfd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.658561 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-config-data" (OuterVolumeSpecName: "config-data") pod "952fef0f-7957-4cec-81ee-60043bf510c9" (UID: "952fef0f-7957-4cec-81ee-60043bf510c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.668437 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "952fef0f-7957-4cec-81ee-60043bf510c9" (UID: "952fef0f-7957-4cec-81ee-60043bf510c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.674541 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "952fef0f-7957-4cec-81ee-60043bf510c9" (UID: "952fef0f-7957-4cec-81ee-60043bf510c9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.675902 4998 generic.go:334] "Generic (PLEG): container finished" podID="952fef0f-7957-4cec-81ee-60043bf510c9" containerID="abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae" exitCode=0
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.675976 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.676028 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952fef0f-7957-4cec-81ee-60043bf510c9","Type":"ContainerDied","Data":"abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae"}
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.676106 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952fef0f-7957-4cec-81ee-60043bf510c9","Type":"ContainerDied","Data":"56b63101273c17a788bd7d6e1382178e3462c602931a011ca3b1098125e6ce06"}
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.676146 4998 scope.go:117] "RemoveContainer" containerID="abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.747496 4998 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-nova-metadata-tls-certs\") on node \"crc\"
DevicePath \"\""
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.747525 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqfd9\" (UniqueName: \"kubernetes.io/projected/952fef0f-7957-4cec-81ee-60043bf510c9-kube-api-access-jqfd9\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.747537 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.747557 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952fef0f-7957-4cec-81ee-60043bf510c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.748645 4998 scope.go:117] "RemoveContainer" containerID="765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.771624 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.787422 4998 scope.go:117] "RemoveContainer" containerID="abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae"
Feb 27 10:41:19 crc kubenswrapper[4998]: E0227 10:41:19.788184 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae\": container with ID starting with abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae not found: ID does not exist" containerID="abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.788344 4998 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae"} err="failed to get container status \"abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae\": rpc error: code = NotFound desc = could not find container \"abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae\": container with ID starting with abb9d8924566106088e4a57486571b14b4e6a46bcff6cea1dc561f773019a0ae not found: ID does not exist"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.788470 4998 scope.go:117] "RemoveContainer" containerID="765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782"
Feb 27 10:41:19 crc kubenswrapper[4998]: E0227 10:41:19.789382 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782\": container with ID starting with 765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782 not found: ID does not exist" containerID="765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.789424 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.789428 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782"} err="failed to get container status \"765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782\": rpc error: code = NotFound desc = could not find container \"765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782\": container with ID starting with 765563b0ca7736d32ad2dff81917695fd2aad1c1c4599da47709ecd79c0c2782 not found: ID does not exist"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.799140 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 10:41:19 crc kubenswrapper[4998]: E0227 10:41:19.799605 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-metadata"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.799619 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-metadata"
Feb 27 10:41:19 crc kubenswrapper[4998]: E0227 10:41:19.799655 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-log"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.799661 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-log"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.799834 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-metadata"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.799846 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" containerName="nova-metadata-log"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.800927 4998 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.807682 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.809754 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.809974 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.950868 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52zwd\" (UniqueName: \"kubernetes.io/projected/9945116f-270e-499c-b0a1-98473487ff27-kube-api-access-52zwd\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.950944 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9945116f-270e-499c-b0a1-98473487ff27-logs\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.951164 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9945116f-270e-499c-b0a1-98473487ff27-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.951564 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9945116f-270e-499c-b0a1-98473487ff27-config-data\") pod \"nova-metadata-0\"
(UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:19 crc kubenswrapper[4998]: I0227 10:41:19.951771 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9945116f-270e-499c-b0a1-98473487ff27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.053048 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9945116f-270e-499c-b0a1-98473487ff27-config-data\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.053118 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9945116f-270e-499c-b0a1-98473487ff27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.053186 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52zwd\" (UniqueName: \"kubernetes.io/projected/9945116f-270e-499c-b0a1-98473487ff27-kube-api-access-52zwd\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.053208 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9945116f-270e-499c-b0a1-98473487ff27-logs\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.053252 4998
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9945116f-270e-499c-b0a1-98473487ff27-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.053860 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9945116f-270e-499c-b0a1-98473487ff27-logs\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.057526 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9945116f-270e-499c-b0a1-98473487ff27-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.057759 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9945116f-270e-499c-b0a1-98473487ff27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.059177 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9945116f-270e-499c-b0a1-98473487ff27-config-data\") pod \"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.077445 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52zwd\" (UniqueName: \"kubernetes.io/projected/9945116f-270e-499c-b0a1-98473487ff27-kube-api-access-52zwd\") pod
\"nova-metadata-0\" (UID: \"9945116f-270e-499c-b0a1-98473487ff27\") " pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.123645 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.625574 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 10:41:20 crc kubenswrapper[4998]: W0227 10:41:20.628175 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9945116f_270e_499c_b0a1_98473487ff27.slice/crio-6dd714586d89d52d9efde6195fa4a1880cb1412fd34447bd3c8b7d2271290148 WatchSource:0}: Error finding container 6dd714586d89d52d9efde6195fa4a1880cb1412fd34447bd3c8b7d2271290148: Status 404 returned error can't find the container with id 6dd714586d89d52d9efde6195fa4a1880cb1412fd34447bd3c8b7d2271290148
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.687090 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9945116f-270e-499c-b0a1-98473487ff27","Type":"ContainerStarted","Data":"6dd714586d89d52d9efde6195fa4a1880cb1412fd34447bd3c8b7d2271290148"}
Feb 27 10:41:20 crc kubenswrapper[4998]: I0227 10:41:20.777388 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="952fef0f-7957-4cec-81ee-60043bf510c9" path="/var/lib/kubelet/pods/952fef0f-7957-4cec-81ee-60043bf510c9/volumes"
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.581531 4998 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.690759 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-config-data\") pod \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") "
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.690898 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-combined-ca-bundle\") pod \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") "
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.691016 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zk7d\" (UniqueName: \"kubernetes.io/projected/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-kube-api-access-2zk7d\") pod \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\" (UID: \"05c40d08-f020-4ef5-8e19-fbbc9abe46a4\") "
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.699101 4998 generic.go:334] "Generic (PLEG): container finished" podID="05c40d08-f020-4ef5-8e19-fbbc9abe46a4" containerID="7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9" exitCode=0
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.699157 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05c40d08-f020-4ef5-8e19-fbbc9abe46a4","Type":"ContainerDied","Data":"7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9"}
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.699182 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"05c40d08-f020-4ef5-8e19-fbbc9abe46a4","Type":"ContainerDied","Data":"b839a8e997f2f3786c278a528e710fda1012762940011d12624376ef41599785"} Feb 27
10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.699199 4998 scope.go:117] "RemoveContainer" containerID="7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9"
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.699298 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.700439 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-kube-api-access-2zk7d" (OuterVolumeSpecName: "kube-api-access-2zk7d") pod "05c40d08-f020-4ef5-8e19-fbbc9abe46a4" (UID: "05c40d08-f020-4ef5-8e19-fbbc9abe46a4"). InnerVolumeSpecName "kube-api-access-2zk7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.703150 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9945116f-270e-499c-b0a1-98473487ff27","Type":"ContainerStarted","Data":"917a398efcf858b0f8a52ddda463d799b6b75f91767b1490a8c0505738bfc8c8"}
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.703215 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9945116f-270e-499c-b0a1-98473487ff27","Type":"ContainerStarted","Data":"4b2f0ef286758f5540c0141a699a08cf81b421360a864ffe3e0ba9c553cab095"}
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.724167 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05c40d08-f020-4ef5-8e19-fbbc9abe46a4" (UID: "05c40d08-f020-4ef5-8e19-fbbc9abe46a4"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.729565 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-config-data" (OuterVolumeSpecName: "config-data") pod "05c40d08-f020-4ef5-8e19-fbbc9abe46a4" (UID: "05c40d08-f020-4ef5-8e19-fbbc9abe46a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.730530 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.730509732 podStartE2EDuration="2.730509732s" podCreationTimestamp="2026-02-27 10:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:41:21.724754599 +0000 UTC m=+1433.723025577" watchObservedRunningTime="2026-02-27 10:41:21.730509732 +0000 UTC m=+1433.728780700"
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.755432 4998 scope.go:117] "RemoveContainer" containerID="7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9"
Feb 27 10:41:21 crc kubenswrapper[4998]: E0227 10:41:21.755804 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9\": container with ID starting with 7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9 not found: ID does not exist" containerID="7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9"
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.755846 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9"} err="failed to get container status
\"7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9\": rpc error: code = NotFound desc = could not find container \"7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9\": container with ID starting with 7489d769d417fc7bdac009962cd25469a34a21ca3b93e912cca036f68e5c2fb9 not found: ID does not exist"
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.793523 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.793561 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:21 crc kubenswrapper[4998]: I0227 10:41:21.793572 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zk7d\" (UniqueName: \"kubernetes.io/projected/05c40d08-f020-4ef5-8e19-fbbc9abe46a4-kube-api-access-2zk7d\") on node \"crc\" DevicePath \"\""
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.029640 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.037660 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.053212 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 10:41:22 crc kubenswrapper[4998]: E0227 10:41:22.053712 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c40d08-f020-4ef5-8e19-fbbc9abe46a4" containerName="nova-scheduler-scheduler"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.053733 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c40d08-f020-4ef5-8e19-fbbc9abe46a4"
containerName="nova-scheduler-scheduler"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.053899 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c40d08-f020-4ef5-8e19-fbbc9abe46a4" containerName="nova-scheduler-scheduler"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.054666 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.057991 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.071540 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.199180 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b061605c-7c6d-4893-ac07-0ce61a84be5e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b061605c-7c6d-4893-ac07-0ce61a84be5e\") " pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.199263 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b061605c-7c6d-4893-ac07-0ce61a84be5e-config-data\") pod \"nova-scheduler-0\" (UID: \"b061605c-7c6d-4893-ac07-0ce61a84be5e\") " pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.199427 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9drng\" (UniqueName: \"kubernetes.io/projected/b061605c-7c6d-4893-ac07-0ce61a84be5e-kube-api-access-9drng\") pod \"nova-scheduler-0\" (UID: \"b061605c-7c6d-4893-ac07-0ce61a84be5e\") " pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.300930 4998
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b061605c-7c6d-4893-ac07-0ce61a84be5e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b061605c-7c6d-4893-ac07-0ce61a84be5e\") " pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.301021 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b061605c-7c6d-4893-ac07-0ce61a84be5e-config-data\") pod \"nova-scheduler-0\" (UID: \"b061605c-7c6d-4893-ac07-0ce61a84be5e\") " pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.301213 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9drng\" (UniqueName: \"kubernetes.io/projected/b061605c-7c6d-4893-ac07-0ce61a84be5e-kube-api-access-9drng\") pod \"nova-scheduler-0\" (UID: \"b061605c-7c6d-4893-ac07-0ce61a84be5e\") " pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.307012 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b061605c-7c6d-4893-ac07-0ce61a84be5e-config-data\") pod \"nova-scheduler-0\" (UID: \"b061605c-7c6d-4893-ac07-0ce61a84be5e\") " pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.308963 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b061605c-7c6d-4893-ac07-0ce61a84be5e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b061605c-7c6d-4893-ac07-0ce61a84be5e\") " pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.316787 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9drng\" (UniqueName:
\"kubernetes.io/projected/b061605c-7c6d-4893-ac07-0ce61a84be5e-kube-api-access-9drng\") pod \"nova-scheduler-0\" (UID: \"b061605c-7c6d-4893-ac07-0ce61a84be5e\") " pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.416505 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.774250 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c40d08-f020-4ef5-8e19-fbbc9abe46a4" path="/var/lib/kubelet/pods/05c40d08-f020-4ef5-8e19-fbbc9abe46a4/volumes"
Feb 27 10:41:22 crc kubenswrapper[4998]: I0227 10:41:22.892248 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 10:41:22 crc kubenswrapper[4998]: W0227 10:41:22.899829 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb061605c_7c6d_4893_ac07_0ce61a84be5e.slice/crio-a4185fe5b960aa127726caec8180012bbf5f114a27f5f289adcd89b433ed30be WatchSource:0}: Error finding container a4185fe5b960aa127726caec8180012bbf5f114a27f5f289adcd89b433ed30be: Status 404 returned error can't find the container with id a4185fe5b960aa127726caec8180012bbf5f114a27f5f289adcd89b433ed30be
Feb 27 10:41:23 crc kubenswrapper[4998]: I0227 10:41:23.735740 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b061605c-7c6d-4893-ac07-0ce61a84be5e","Type":"ContainerStarted","Data":"f4487b8acc184ed378129a0e7e976c4e29e9799f15aa29fd3ee8d17b00ba8f9d"}
Feb 27 10:41:23 crc kubenswrapper[4998]: I0227 10:41:23.736024 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b061605c-7c6d-4893-ac07-0ce61a84be5e","Type":"ContainerStarted","Data":"a4185fe5b960aa127726caec8180012bbf5f114a27f5f289adcd89b433ed30be"}
Feb 27 10:41:23 crc kubenswrapper[4998]: I0227 10:41:23.752476 4998
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.752460042 podStartE2EDuration="1.752460042s" podCreationTimestamp="2026-02-27 10:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:41:23.750724014 +0000 UTC m=+1435.748995002" watchObservedRunningTime="2026-02-27 10:41:23.752460042 +0000 UTC m=+1435.750731010"
Feb 27 10:41:25 crc kubenswrapper[4998]: I0227 10:41:25.125399 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 27 10:41:25 crc kubenswrapper[4998]: I0227 10:41:25.125560 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 27 10:41:25 crc kubenswrapper[4998]: E0227 10:41:25.916635 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92acff51_4ca2_43c6_ab0f_480e01e9efb8.slice/crio-conmon-550199ff57ddd5616ea967474384544dce1ead6cabf6b337129b1634632860fa.scope\": RecentStats: unable to find data in memory cache]"
Feb 27 10:41:27 crc kubenswrapper[4998]: I0227 10:41:27.341762 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 27 10:41:27 crc kubenswrapper[4998]: I0227 10:41:27.341832 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 27 10:41:27 crc kubenswrapper[4998]: I0227 10:41:27.417382 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 27 10:41:28 crc kubenswrapper[4998]: I0227 10:41:28.355376 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba55db21-2e4c-4171-ab36-8b3ad880e27f" containerName="nova-api-api" probeResult="failure" output="Get
\"https://10.217.0.211:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 27 10:41:28 crc kubenswrapper[4998]: I0227 10:41:28.355556 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba55db21-2e4c-4171-ab36-8b3ad880e27f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.211:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 27 10:41:30 crc kubenswrapper[4998]: I0227 10:41:30.124420 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 27 10:41:30 crc kubenswrapper[4998]: I0227 10:41:30.125736 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 27 10:41:31 crc kubenswrapper[4998]: I0227 10:41:31.138363 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9945116f-270e-499c-b0a1-98473487ff27" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 27 10:41:31 crc kubenswrapper[4998]: I0227 10:41:31.139524 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9945116f-270e-499c-b0a1-98473487ff27" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 27 10:41:32 crc kubenswrapper[4998]: I0227 10:41:32.417331 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 27 10:41:32 crc kubenswrapper[4998]: I0227 10:41:32.466901 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 27 10:41:32 crc kubenswrapper[4998]: I0227 10:41:32.857787 4998 kubelet.go:2542]
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 10:41:35 crc kubenswrapper[4998]: I0227 10:41:35.923904 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 10:41:37 crc kubenswrapper[4998]: I0227 10:41:37.350104 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 10:41:37 crc kubenswrapper[4998]: I0227 10:41:37.351981 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 10:41:37 crc kubenswrapper[4998]: I0227 10:41:37.352815 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 10:41:37 crc kubenswrapper[4998]: I0227 10:41:37.356828 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 10:41:37 crc kubenswrapper[4998]: I0227 10:41:37.869614 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 10:41:37 crc kubenswrapper[4998]: I0227 10:41:37.875743 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 10:41:40 crc kubenswrapper[4998]: I0227 10:41:40.130500 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 10:41:40 crc kubenswrapper[4998]: I0227 10:41:40.131967 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 10:41:40 crc kubenswrapper[4998]: I0227 10:41:40.137002 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 10:41:40 crc kubenswrapper[4998]: I0227 10:41:40.904610 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 10:41:45 crc kubenswrapper[4998]: I0227 10:41:45.745813 4998 scope.go:117] 
"RemoveContainer" containerID="0b5304665eda60e1c3766a688a5b5f1bae8baa4be44271087140a1ee16f263e2" Feb 27 10:41:45 crc kubenswrapper[4998]: I0227 10:41:45.778072 4998 scope.go:117] "RemoveContainer" containerID="3869a51c6b8c900ee2a904c84c238b3d587fab296d20ca2d9d0d585cc03e1db8" Feb 27 10:41:45 crc kubenswrapper[4998]: I0227 10:41:45.813286 4998 scope.go:117] "RemoveContainer" containerID="49457f1fe0d3f268f621e3ac19d1d0296977592093d46fc3244f8f4e3869437e" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.266537 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xbmkb"] Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.269713 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.281105 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbmkb"] Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.421395 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-utilities\") pod \"redhat-operators-xbmkb\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") " pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.421472 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-catalog-content\") pod \"redhat-operators-xbmkb\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") " pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.421526 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9wfx\" (UniqueName: 
\"kubernetes.io/projected/11764c1f-c462-471c-b5ab-e3822b04e887-kube-api-access-s9wfx\") pod \"redhat-operators-xbmkb\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") " pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.523345 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9wfx\" (UniqueName: \"kubernetes.io/projected/11764c1f-c462-471c-b5ab-e3822b04e887-kube-api-access-s9wfx\") pod \"redhat-operators-xbmkb\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") " pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.523491 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-utilities\") pod \"redhat-operators-xbmkb\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") " pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.523537 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-catalog-content\") pod \"redhat-operators-xbmkb\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") " pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.524036 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-catalog-content\") pod \"redhat-operators-xbmkb\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") " pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.524216 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-utilities\") pod \"redhat-operators-xbmkb\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") " pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.543690 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9wfx\" (UniqueName: \"kubernetes.io/projected/11764c1f-c462-471c-b5ab-e3822b04e887-kube-api-access-s9wfx\") pod \"redhat-operators-xbmkb\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") " pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:48 crc kubenswrapper[4998]: I0227 10:41:48.589662 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:49 crc kubenswrapper[4998]: I0227 10:41:49.055059 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbmkb"] Feb 27 10:41:49 crc kubenswrapper[4998]: I0227 10:41:49.987739 4998 generic.go:334] "Generic (PLEG): container finished" podID="11764c1f-c462-471c-b5ab-e3822b04e887" containerID="951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83" exitCode=0 Feb 27 10:41:49 crc kubenswrapper[4998]: I0227 10:41:49.988036 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbmkb" event={"ID":"11764c1f-c462-471c-b5ab-e3822b04e887","Type":"ContainerDied","Data":"951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83"} Feb 27 10:41:49 crc kubenswrapper[4998]: I0227 10:41:49.988068 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbmkb" event={"ID":"11764c1f-c462-471c-b5ab-e3822b04e887","Type":"ContainerStarted","Data":"521772b3a98c16ae0d37652ece127fc980bb843e271aaa1acd4ed99f53ee629e"} Feb 27 10:41:50 crc kubenswrapper[4998]: I0227 10:41:50.012211 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-server-0"] Feb 27 10:41:50 crc kubenswrapper[4998]: I0227 10:41:50.997338 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbmkb" event={"ID":"11764c1f-c462-471c-b5ab-e3822b04e887","Type":"ContainerStarted","Data":"a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5"} Feb 27 10:41:51 crc kubenswrapper[4998]: I0227 10:41:51.097922 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:41:54 crc kubenswrapper[4998]: I0227 10:41:54.486590 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" containerName="rabbitmq" containerID="cri-o://9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88" gracePeriod=604796 Feb 27 10:41:55 crc kubenswrapper[4998]: I0227 10:41:55.037302 4998 generic.go:334] "Generic (PLEG): container finished" podID="11764c1f-c462-471c-b5ab-e3822b04e887" containerID="a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5" exitCode=0 Feb 27 10:41:55 crc kubenswrapper[4998]: I0227 10:41:55.037366 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbmkb" event={"ID":"11764c1f-c462-471c-b5ab-e3822b04e887","Type":"ContainerDied","Data":"a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5"} Feb 27 10:41:55 crc kubenswrapper[4998]: I0227 10:41:55.067006 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Feb 27 10:41:55 crc kubenswrapper[4998]: I0227 10:41:55.327400 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="68cd6142-df7e-4994-97c0-0bc08ea1e3d4" 
containerName="rabbitmq" containerID="cri-o://f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f" gracePeriod=604796 Feb 27 10:41:56 crc kubenswrapper[4998]: I0227 10:41:56.046829 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbmkb" event={"ID":"11764c1f-c462-471c-b5ab-e3822b04e887","Type":"ContainerStarted","Data":"072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd"} Feb 27 10:41:56 crc kubenswrapper[4998]: I0227 10:41:56.088311 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xbmkb" podStartSLOduration=2.638114443 podStartE2EDuration="8.088289648s" podCreationTimestamp="2026-02-27 10:41:48 +0000 UTC" firstStartedPulling="2026-02-27 10:41:49.989925935 +0000 UTC m=+1461.988196903" lastFinishedPulling="2026-02-27 10:41:55.44010114 +0000 UTC m=+1467.438372108" observedRunningTime="2026-02-27 10:41:56.07702164 +0000 UTC m=+1468.075292648" watchObservedRunningTime="2026-02-27 10:41:56.088289648 +0000 UTC m=+1468.086560636" Feb 27 10:41:58 crc kubenswrapper[4998]: I0227 10:41:58.590004 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:58 crc kubenswrapper[4998]: I0227 10:41:58.590707 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xbmkb" Feb 27 10:41:59 crc kubenswrapper[4998]: I0227 10:41:59.637050 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xbmkb" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" containerName="registry-server" probeResult="failure" output=< Feb 27 10:41:59 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s Feb 27 10:41:59 crc kubenswrapper[4998]: > Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.152322 4998 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29536482-zs5m9"] Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.153772 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536482-zs5m9" Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.156694 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.156951 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.157697 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.174734 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536482-zs5m9"] Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.290904 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czxfn\" (UniqueName: \"kubernetes.io/projected/590d897d-12b6-491a-bf20-33c238b10871-kube-api-access-czxfn\") pod \"auto-csr-approver-29536482-zs5m9\" (UID: \"590d897d-12b6-491a-bf20-33c238b10871\") " pod="openshift-infra/auto-csr-approver-29536482-zs5m9" Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.393705 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czxfn\" (UniqueName: \"kubernetes.io/projected/590d897d-12b6-491a-bf20-33c238b10871-kube-api-access-czxfn\") pod \"auto-csr-approver-29536482-zs5m9\" (UID: \"590d897d-12b6-491a-bf20-33c238b10871\") " pod="openshift-infra/auto-csr-approver-29536482-zs5m9" Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.413515 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czxfn\" (UniqueName: 
\"kubernetes.io/projected/590d897d-12b6-491a-bf20-33c238b10871-kube-api-access-czxfn\") pod \"auto-csr-approver-29536482-zs5m9\" (UID: \"590d897d-12b6-491a-bf20-33c238b10871\") " pod="openshift-infra/auto-csr-approver-29536482-zs5m9" Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.476656 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536482-zs5m9" Feb 27 10:42:00 crc kubenswrapper[4998]: I0227 10:42:00.964125 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536482-zs5m9"] Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.101678 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.102256 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536482-zs5m9" event={"ID":"590d897d-12b6-491a-bf20-33c238b10871","Type":"ContainerStarted","Data":"6570732ea594275c2b65c5cb823d7f978e51dcca720681266e6596d67b187088"} Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.104202 4998 generic.go:334] "Generic (PLEG): container finished" podID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" containerID="9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88" exitCode=0 Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.104246 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ca208a2-3ba0-43e6-a2c4-942c12e54b41","Type":"ContainerDied","Data":"9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88"} Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.104268 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ca208a2-3ba0-43e6-a2c4-942c12e54b41","Type":"ContainerDied","Data":"e52a8922465ed9814efebc82a6bdd9ee0f7bd82c90c5a603d3fa439499654251"} Feb 27 10:42:01 crc 
kubenswrapper[4998]: I0227 10:42:01.104284 4998 scope.go:117] "RemoveContainer" containerID="9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.104297 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.135529 4998 scope.go:117] "RemoveContainer" containerID="f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.175932 4998 scope.go:117] "RemoveContainer" containerID="9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88" Feb 27 10:42:01 crc kubenswrapper[4998]: E0227 10:42:01.176584 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88\": container with ID starting with 9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88 not found: ID does not exist" containerID="9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.176625 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88"} err="failed to get container status \"9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88\": rpc error: code = NotFound desc = could not find container \"9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88\": container with ID starting with 9ff77bb37ea66515101280ca31ee414ad46484284b3cad62d344ef3e3bdf1b88 not found: ID does not exist" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.176651 4998 scope.go:117] "RemoveContainer" containerID="f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b" Feb 27 10:42:01 crc kubenswrapper[4998]: E0227 10:42:01.177146 
4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b\": container with ID starting with f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b not found: ID does not exist" containerID="f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.177205 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b"} err="failed to get container status \"f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b\": rpc error: code = NotFound desc = could not find container \"f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b\": container with ID starting with f72bedbf2a5292be214576f7a9572718cf7fad9c31f0bf452a153749bb16372b not found: ID does not exist" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.211915 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-erlang-cookie\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.211966 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-pod-info\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.212007 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-plugins-conf\") pod 
\"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.212058 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-plugins\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.212102 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-config-data\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.212138 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcjdq\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-kube-api-access-hcjdq\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.212167 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-erlang-cookie-secret\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.212316 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-server-conf\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.212344 4998 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-confd\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.212364 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.212454 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-tls\") pod \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\" (UID: \"0ca208a2-3ba0-43e6-a2c4-942c12e54b41\") " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.213078 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.214747 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.215492 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.221559 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.224417 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.224425 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-pod-info" (OuterVolumeSpecName: "pod-info") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.224526 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-kube-api-access-hcjdq" (OuterVolumeSpecName: "kube-api-access-hcjdq") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "kube-api-access-hcjdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.230559 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.295504 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-server-conf" (OuterVolumeSpecName: "server-conf") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.296066 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-config-data" (OuterVolumeSpecName: "config-data") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.318761 4998 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.318800 4998 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.318815 4998 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.318828 4998 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.318840 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.318852 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcjdq\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-kube-api-access-hcjdq\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.318864 4998 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: 
I0227 10:42:01.318876 4998 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.318907 4998 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.318919 4998 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.348315 4998 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.420663 4998 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.482405 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0ca208a2-3ba0-43e6-a2c4-942c12e54b41" (UID: "0ca208a2-3ba0-43e6-a2c4-942c12e54b41"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.521990 4998 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ca208a2-3ba0-43e6-a2c4-942c12e54b41-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.740329 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.752619 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.772113 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:42:01 crc kubenswrapper[4998]: E0227 10:42:01.772480 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" containerName="rabbitmq" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.772497 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" containerName="rabbitmq" Feb 27 10:42:01 crc kubenswrapper[4998]: E0227 10:42:01.772514 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" containerName="setup-container" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.772522 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" containerName="setup-container" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.775520 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" containerName="rabbitmq" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.779557 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.783754 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.785945 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q8jmt" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.786114 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.786277 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.786392 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.786494 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.787058 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.789160 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.930569 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdwcd\" (UniqueName: \"kubernetes.io/projected/058b490f-35b3-42df-a3b3-a684664a0e44-kube-api-access-wdwcd\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.930912 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/058b490f-35b3-42df-a3b3-a684664a0e44-config-data\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.930945 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/058b490f-35b3-42df-a3b3-a684664a0e44-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.930974 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/058b490f-35b3-42df-a3b3-a684664a0e44-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.931007 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.931045 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/058b490f-35b3-42df-a3b3-a684664a0e44-server-conf\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.931074 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.931095 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.931125 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.931151 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:01 crc kubenswrapper[4998]: I0227 10:42:01.931244 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/058b490f-35b3-42df-a3b3-a684664a0e44-pod-info\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.032859 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdwcd\" (UniqueName: \"kubernetes.io/projected/058b490f-35b3-42df-a3b3-a684664a0e44-kube-api-access-wdwcd\") pod 
\"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.032918 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/058b490f-35b3-42df-a3b3-a684664a0e44-config-data\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.032943 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/058b490f-35b3-42df-a3b3-a684664a0e44-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.032967 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/058b490f-35b3-42df-a3b3-a684664a0e44-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.032994 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.033025 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/058b490f-35b3-42df-a3b3-a684664a0e44-server-conf\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.033046 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.033060 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.033083 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.033101 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.033121 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/058b490f-35b3-42df-a3b3-a684664a0e44-pod-info\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.033966 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/058b490f-35b3-42df-a3b3-a684664a0e44-plugins-conf\") 
pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.034311 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/058b490f-35b3-42df-a3b3-a684664a0e44-config-data\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.034330 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.034688 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.037066 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.038077 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 
10:42:02.038277 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/058b490f-35b3-42df-a3b3-a684664a0e44-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.039707 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/058b490f-35b3-42df-a3b3-a684664a0e44-pod-info\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.041752 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/058b490f-35b3-42df-a3b3-a684664a0e44-server-conf\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.044504 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/058b490f-35b3-42df-a3b3-a684664a0e44-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.050556 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdwcd\" (UniqueName: \"kubernetes.io/projected/058b490f-35b3-42df-a3b3-a684664a0e44-kube-api-access-wdwcd\") pod \"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.071377 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"rabbitmq-server-0\" (UID: \"058b490f-35b3-42df-a3b3-a684664a0e44\") " pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.089907 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.117166 4998 generic.go:334] "Generic (PLEG): container finished" podID="68cd6142-df7e-4994-97c0-0bc08ea1e3d4" containerID="f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f" exitCode=0 Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.117214 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"68cd6142-df7e-4994-97c0-0bc08ea1e3d4","Type":"ContainerDied","Data":"f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f"} Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.117262 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"68cd6142-df7e-4994-97c0-0bc08ea1e3d4","Type":"ContainerDied","Data":"8a535c8958ed175671f7276a4ebd6c425e975c31457e7f338ddc48d10db11f8e"} Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.117281 4998 scope.go:117] "RemoveContainer" containerID="f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.117427 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.125215 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.170667 4998 scope.go:117] "RemoveContainer" containerID="009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.219838 4998 scope.go:117] "RemoveContainer" containerID="f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f" Feb 27 10:42:02 crc kubenswrapper[4998]: E0227 10:42:02.231701 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f\": container with ID starting with f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f not found: ID does not exist" containerID="f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.231760 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f"} err="failed to get container status \"f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f\": rpc error: code = NotFound desc = could not find container \"f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f\": container with ID starting with f6a98a41d2d5f12fca3b789a125d274ad94dc22565def6bda4dfcc1cd769e21f not found: ID does not exist" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.231785 4998 scope.go:117] "RemoveContainer" containerID="009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1" Feb 27 10:42:02 crc kubenswrapper[4998]: E0227 10:42:02.234502 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1\": container with ID starting with 
009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1 not found: ID does not exist" containerID="009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.234558 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1"} err="failed to get container status \"009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1\": rpc error: code = NotFound desc = could not find container \"009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1\": container with ID starting with 009a183d6e1f7dc57da1884d0a8c4b15c0dd6b099075bea5e84e0e1183821aa1 not found: ID does not exist" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238513 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-confd\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238596 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-erlang-cookie\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238667 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4z7d\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-kube-api-access-z4z7d\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238721 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-config-data\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238752 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-tls\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238776 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-plugins\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238804 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238836 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-pod-info\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238887 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-erlang-cookie-secret\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 
10:42:02.238912 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-server-conf\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.238993 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-plugins-conf\") pod \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\" (UID: \"68cd6142-df7e-4994-97c0-0bc08ea1e3d4\") " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.240866 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.241271 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.243576 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.253245 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.255396 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-pod-info" (OuterVolumeSpecName: "pod-info") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.255511 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-kube-api-access-z4z7d" (OuterVolumeSpecName: "kube-api-access-z4z7d") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "kube-api-access-z4z7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.255578 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.257653 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.275076 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-config-data" (OuterVolumeSpecName: "config-data") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.335103 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-server-conf" (OuterVolumeSpecName: "server-conf") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341188 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4z7d\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-kube-api-access-z4z7d\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341247 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341261 4998 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341273 4998 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341310 4998 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341323 4998 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341335 4998 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341347 4998 
reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341358 4998 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.341369 4998 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.368326 4998 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.371319 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "68cd6142-df7e-4994-97c0-0bc08ea1e3d4" (UID: "68cd6142-df7e-4994-97c0-0bc08ea1e3d4"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.443549 4998 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.443591 4998 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68cd6142-df7e-4994-97c0-0bc08ea1e3d4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.539896 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.566287 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.577563 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:42:02 crc kubenswrapper[4998]: E0227 10:42:02.577935 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cd6142-df7e-4994-97c0-0bc08ea1e3d4" containerName="setup-container" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.577952 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cd6142-df7e-4994-97c0-0bc08ea1e3d4" containerName="setup-container" Feb 27 10:42:02 crc kubenswrapper[4998]: E0227 10:42:02.577975 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cd6142-df7e-4994-97c0-0bc08ea1e3d4" containerName="rabbitmq" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.577983 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cd6142-df7e-4994-97c0-0bc08ea1e3d4" containerName="rabbitmq" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.578249 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="68cd6142-df7e-4994-97c0-0bc08ea1e3d4" 
containerName="rabbitmq" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.579145 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.590185 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.590611 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.590748 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.591706 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.591741 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.591956 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dnxmh" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.591996 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.601520 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:42:02 crc kubenswrapper[4998]: W0227 10:42:02.758937 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod058b490f_35b3_42df_a3b3_a684664a0e44.slice/crio-10fc08bec19b4c4a08c0b22dd48725c8e06adfabada9448fa63dc55920e362a9 WatchSource:0}: Error finding container 10fc08bec19b4c4a08c0b22dd48725c8e06adfabada9448fa63dc55920e362a9: 
Status 404 returned error can't find the container with id 10fc08bec19b4c4a08c0b22dd48725c8e06adfabada9448fa63dc55920e362a9 Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.762756 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d2295db-a05f-492a-82d7-295fd2222daf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.762806 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.762825 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.762862 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.762896 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.762927 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2tw\" (UniqueName: \"kubernetes.io/projected/7d2295db-a05f-492a-82d7-295fd2222daf-kube-api-access-6f2tw\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.762952 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d2295db-a05f-492a-82d7-295fd2222daf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.762968 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d2295db-a05f-492a-82d7-295fd2222daf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.763039 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.763179 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d2295db-a05f-492a-82d7-295fd2222daf-config-data\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.763220 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d2295db-a05f-492a-82d7-295fd2222daf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.775954 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca208a2-3ba0-43e6-a2c4-942c12e54b41" path="/var/lib/kubelet/pods/0ca208a2-3ba0-43e6-a2c4-942c12e54b41/volumes" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.777416 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68cd6142-df7e-4994-97c0-0bc08ea1e3d4" path="/var/lib/kubelet/pods/68cd6142-df7e-4994-97c0-0bc08ea1e3d4/volumes" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.778020 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.865559 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d2295db-a05f-492a-82d7-295fd2222daf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.865624 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.865711 4998 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d2295db-a05f-492a-82d7-295fd2222daf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.865741 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d2295db-a05f-492a-82d7-295fd2222daf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.865926 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d2295db-a05f-492a-82d7-295fd2222daf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.865983 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.866010 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.866098 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.866204 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.866311 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f2tw\" (UniqueName: \"kubernetes.io/projected/7d2295db-a05f-492a-82d7-295fd2222daf-kube-api-access-6f2tw\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.866409 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d2295db-a05f-492a-82d7-295fd2222daf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.867674 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.868250 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/7d2295db-a05f-492a-82d7-295fd2222daf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.869301 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.872887 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.873800 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.874213 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d2295db-a05f-492a-82d7-295fd2222daf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.874413 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d2295db-a05f-492a-82d7-295fd2222daf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.877481 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d2295db-a05f-492a-82d7-295fd2222daf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.878263 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d2295db-a05f-492a-82d7-295fd2222daf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.879020 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d2295db-a05f-492a-82d7-295fd2222daf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.901864 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.902743 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f2tw\" (UniqueName: \"kubernetes.io/projected/7d2295db-a05f-492a-82d7-295fd2222daf-kube-api-access-6f2tw\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d2295db-a05f-492a-82d7-295fd2222daf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:02 crc kubenswrapper[4998]: I0227 10:42:02.915162 4998 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.144136 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"058b490f-35b3-42df-a3b3-a684664a0e44","Type":"ContainerStarted","Data":"10fc08bec19b4c4a08c0b22dd48725c8e06adfabada9448fa63dc55920e362a9"} Feb 27 10:42:03 crc kubenswrapper[4998]: W0227 10:42:03.360388 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d2295db_a05f_492a_82d7_295fd2222daf.slice/crio-15dcf298419a9db6bb005783211b92ff5e49a1be810e4ebdab6ceddce8a197f2 WatchSource:0}: Error finding container 15dcf298419a9db6bb005783211b92ff5e49a1be810e4ebdab6ceddce8a197f2: Status 404 returned error can't find the container with id 15dcf298419a9db6bb005783211b92ff5e49a1be810e4ebdab6ceddce8a197f2 Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.366427 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.640113 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-tb57x"] Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.642072 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.644383 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.657498 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-tb57x"] Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.786354 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-config\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.786612 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z68d8\" (UniqueName: \"kubernetes.io/projected/e5ed842a-2de5-404d-8ba6-9ce32aee6922-kube-api-access-z68d8\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.786683 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.786776 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.786832 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.786931 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.787211 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.790244 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-tb57x"] Feb 27 10:42:03 crc kubenswrapper[4998]: E0227 10:42:03.791059 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-z68d8 openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" podUID="e5ed842a-2de5-404d-8ba6-9ce32aee6922" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.818853 4998 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-55478c4467-zvnkz"] Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.820460 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.842624 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-zvnkz"] Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.890143 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.890271 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-config\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.890331 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z68d8\" (UniqueName: \"kubernetes.io/projected/e5ed842a-2de5-404d-8ba6-9ce32aee6922-kube-api-access-z68d8\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.890449 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 
10:42:03.890470 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.890490 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.890543 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.891148 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.891468 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.891735 4998 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.891783 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.891792 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-config\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.891955 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.911498 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z68d8\" (UniqueName: \"kubernetes.io/projected/e5ed842a-2de5-404d-8ba6-9ce32aee6922-kube-api-access-z68d8\") pod \"dnsmasq-dns-79bd4cc8c9-tb57x\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.992166 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.992268 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwj9v\" (UniqueName: \"kubernetes.io/projected/50dafab7-7800-4392-a6e1-d081a9a29ae9-kube-api-access-hwj9v\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.992367 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-dns-svc\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.992414 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.992505 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.992541 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:03 crc kubenswrapper[4998]: I0227 10:42:03.992657 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-config\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.094625 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-config\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.094682 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.094729 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwj9v\" (UniqueName: \"kubernetes.io/projected/50dafab7-7800-4392-a6e1-d081a9a29ae9-kube-api-access-hwj9v\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz" Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.094798 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-dns-svc\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.094853 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.094908 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.094947 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.095831 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-config\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.095958 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.095985 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.095958 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-dns-svc\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.096376 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.096436 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50dafab7-7800-4392-a6e1-d081a9a29ae9-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.123057 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwj9v\" (UniqueName: \"kubernetes.io/projected/50dafab7-7800-4392-a6e1-d081a9a29ae9-kube-api-access-hwj9v\") pod \"dnsmasq-dns-55478c4467-zvnkz\" (UID: \"50dafab7-7800-4392-a6e1-d081a9a29ae9\") " pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.137665 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.154369 4998 generic.go:334] "Generic (PLEG): container finished" podID="590d897d-12b6-491a-bf20-33c238b10871" containerID="89d5d1c9df571b02645d0ebdb4dd67443dbab4a40e735f37698dd8bae06ee4d3" exitCode=0
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.154451 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536482-zs5m9" event={"ID":"590d897d-12b6-491a-bf20-33c238b10871","Type":"ContainerDied","Data":"89d5d1c9df571b02645d0ebdb4dd67443dbab4a40e735f37698dd8bae06ee4d3"}
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.157089 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.157093 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7d2295db-a05f-492a-82d7-295fd2222daf","Type":"ContainerStarted","Data":"15dcf298419a9db6bb005783211b92ff5e49a1be810e4ebdab6ceddce8a197f2"}
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.229218 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x"
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.408776 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z68d8\" (UniqueName: \"kubernetes.io/projected/e5ed842a-2de5-404d-8ba6-9ce32aee6922-kube-api-access-z68d8\") pod \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") "
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.409061 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-swift-storage-0\") pod \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") "
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.409085 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-svc\") pod \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") "
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.409148 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-openstack-edpm-ipam\") pod \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") "
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.409260 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-nb\") pod \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") "
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.409309 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-sb\") pod \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") "
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.409340 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-config\") pod \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\" (UID: \"e5ed842a-2de5-404d-8ba6-9ce32aee6922\") "
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.410076 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5ed842a-2de5-404d-8ba6-9ce32aee6922" (UID: "e5ed842a-2de5-404d-8ba6-9ce32aee6922"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.410253 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-config" (OuterVolumeSpecName: "config") pod "e5ed842a-2de5-404d-8ba6-9ce32aee6922" (UID: "e5ed842a-2de5-404d-8ba6-9ce32aee6922"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.410291 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e5ed842a-2de5-404d-8ba6-9ce32aee6922" (UID: "e5ed842a-2de5-404d-8ba6-9ce32aee6922"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.410319 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5ed842a-2de5-404d-8ba6-9ce32aee6922" (UID: "e5ed842a-2de5-404d-8ba6-9ce32aee6922"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.410426 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5ed842a-2de5-404d-8ba6-9ce32aee6922" (UID: "e5ed842a-2de5-404d-8ba6-9ce32aee6922"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.410526 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "e5ed842a-2de5-404d-8ba6-9ce32aee6922" (UID: "e5ed842a-2de5-404d-8ba6-9ce32aee6922"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.411324 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.411355 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.411368 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.411379 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.411391 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.411401 4998 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e5ed842a-2de5-404d-8ba6-9ce32aee6922-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.469684 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ed842a-2de5-404d-8ba6-9ce32aee6922-kube-api-access-z68d8" (OuterVolumeSpecName: "kube-api-access-z68d8") pod "e5ed842a-2de5-404d-8ba6-9ce32aee6922" (UID: "e5ed842a-2de5-404d-8ba6-9ce32aee6922"). InnerVolumeSpecName "kube-api-access-z68d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.514254 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z68d8\" (UniqueName: \"kubernetes.io/projected/e5ed842a-2de5-404d-8ba6-9ce32aee6922-kube-api-access-z68d8\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:04 crc kubenswrapper[4998]: I0227 10:42:04.683479 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-zvnkz"]
Feb 27 10:42:05 crc kubenswrapper[4998]: I0227 10:42:05.168540 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"058b490f-35b3-42df-a3b3-a684664a0e44","Type":"ContainerStarted","Data":"13fab38c9ef8fb59d01b560ad7fccb175b8ee207edbbb4b5396a28b23815f05d"}
Feb 27 10:42:05 crc kubenswrapper[4998]: I0227 10:42:05.170715 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-zvnkz" event={"ID":"50dafab7-7800-4392-a6e1-d081a9a29ae9","Type":"ContainerStarted","Data":"d2d7815c3c167bebc88a78817032bb4e6b58660d325c95181cbf8a9106b602c2"}
Feb 27 10:42:05 crc kubenswrapper[4998]: I0227 10:42:05.170756 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-zvnkz" event={"ID":"50dafab7-7800-4392-a6e1-d081a9a29ae9","Type":"ContainerStarted","Data":"28cb369081a097e7af9da40924f652ff86b557f5c520a38939e1602910b37414"}
Feb 27 10:42:05 crc kubenswrapper[4998]: I0227 10:42:05.170770 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-tb57x"
Feb 27 10:42:05 crc kubenswrapper[4998]: I0227 10:42:05.261363 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-tb57x"]
Feb 27 10:42:05 crc kubenswrapper[4998]: I0227 10:42:05.271091 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-tb57x"]
Feb 27 10:42:05 crc kubenswrapper[4998]: I0227 10:42:05.961340 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536482-zs5m9"
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.148663 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czxfn\" (UniqueName: \"kubernetes.io/projected/590d897d-12b6-491a-bf20-33c238b10871-kube-api-access-czxfn\") pod \"590d897d-12b6-491a-bf20-33c238b10871\" (UID: \"590d897d-12b6-491a-bf20-33c238b10871\") "
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.157164 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590d897d-12b6-491a-bf20-33c238b10871-kube-api-access-czxfn" (OuterVolumeSpecName: "kube-api-access-czxfn") pod "590d897d-12b6-491a-bf20-33c238b10871" (UID: "590d897d-12b6-491a-bf20-33c238b10871"). InnerVolumeSpecName "kube-api-access-czxfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.182874 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7d2295db-a05f-492a-82d7-295fd2222daf","Type":"ContainerStarted","Data":"a19148b92043467e836bf3ed92605bb7164775feebb7d35d0916e04da7b59e93"}
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.186423 4998 generic.go:334] "Generic (PLEG): container finished" podID="50dafab7-7800-4392-a6e1-d081a9a29ae9" containerID="d2d7815c3c167bebc88a78817032bb4e6b58660d325c95181cbf8a9106b602c2" exitCode=0
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.186488 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-zvnkz" event={"ID":"50dafab7-7800-4392-a6e1-d081a9a29ae9","Type":"ContainerDied","Data":"d2d7815c3c167bebc88a78817032bb4e6b58660d325c95181cbf8a9106b602c2"}
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.189417 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536482-zs5m9"
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.189474 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536482-zs5m9" event={"ID":"590d897d-12b6-491a-bf20-33c238b10871","Type":"ContainerDied","Data":"6570732ea594275c2b65c5cb823d7f978e51dcca720681266e6596d67b187088"}
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.189496 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6570732ea594275c2b65c5cb823d7f978e51dcca720681266e6596d67b187088"
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.251557 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czxfn\" (UniqueName: \"kubernetes.io/projected/590d897d-12b6-491a-bf20-33c238b10871-kube-api-access-czxfn\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:06 crc kubenswrapper[4998]: I0227 10:42:06.780287 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ed842a-2de5-404d-8ba6-9ce32aee6922" path="/var/lib/kubelet/pods/e5ed842a-2de5-404d-8ba6-9ce32aee6922/volumes"
Feb 27 10:42:07 crc kubenswrapper[4998]: I0227 10:42:07.020855 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536476-dsmkc"]
Feb 27 10:42:07 crc kubenswrapper[4998]: I0227 10:42:07.031675 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536476-dsmkc"]
Feb 27 10:42:07 crc kubenswrapper[4998]: I0227 10:42:07.204072 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-zvnkz" event={"ID":"50dafab7-7800-4392-a6e1-d081a9a29ae9","Type":"ContainerStarted","Data":"c631000dae04b070c2cadcceb8bb7a518bec00be63d1b7345672870bc3ffb99c"}
Feb 27 10:42:07 crc kubenswrapper[4998]: I0227 10:42:07.204551 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:07 crc kubenswrapper[4998]: I0227 10:42:07.228993 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-zvnkz" podStartSLOduration=4.228972286 podStartE2EDuration="4.228972286s" podCreationTimestamp="2026-02-27 10:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:42:07.224013499 +0000 UTC m=+1479.222284467" watchObservedRunningTime="2026-02-27 10:42:07.228972286 +0000 UTC m=+1479.227243264"
Feb 27 10:42:08 crc kubenswrapper[4998]: I0227 10:42:08.640378 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xbmkb"
Feb 27 10:42:08 crc kubenswrapper[4998]: I0227 10:42:08.696060 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xbmkb"
Feb 27 10:42:08 crc kubenswrapper[4998]: I0227 10:42:08.779503 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0ae9f5-5e46-43c3-ae31-23a68848c96d" path="/var/lib/kubelet/pods/bb0ae9f5-5e46-43c3-ae31-23a68848c96d/volumes"
Feb 27 10:42:08 crc kubenswrapper[4998]: I0227 10:42:08.882251 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xbmkb"]
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.235132 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xbmkb" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" containerName="registry-server" containerID="cri-o://072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd" gracePeriod=2
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.634115 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbmkb"
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.755055 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9wfx\" (UniqueName: \"kubernetes.io/projected/11764c1f-c462-471c-b5ab-e3822b04e887-kube-api-access-s9wfx\") pod \"11764c1f-c462-471c-b5ab-e3822b04e887\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") "
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.755180 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-utilities\") pod \"11764c1f-c462-471c-b5ab-e3822b04e887\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") "
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.755251 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-catalog-content\") pod \"11764c1f-c462-471c-b5ab-e3822b04e887\" (UID: \"11764c1f-c462-471c-b5ab-e3822b04e887\") "
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.756670 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-utilities" (OuterVolumeSpecName: "utilities") pod "11764c1f-c462-471c-b5ab-e3822b04e887" (UID: "11764c1f-c462-471c-b5ab-e3822b04e887"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.760834 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11764c1f-c462-471c-b5ab-e3822b04e887-kube-api-access-s9wfx" (OuterVolumeSpecName: "kube-api-access-s9wfx") pod "11764c1f-c462-471c-b5ab-e3822b04e887" (UID: "11764c1f-c462-471c-b5ab-e3822b04e887"). InnerVolumeSpecName "kube-api-access-s9wfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.866356 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9wfx\" (UniqueName: \"kubernetes.io/projected/11764c1f-c462-471c-b5ab-e3822b04e887-kube-api-access-s9wfx\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.866582 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.875758 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11764c1f-c462-471c-b5ab-e3822b04e887" (UID: "11764c1f-c462-471c-b5ab-e3822b04e887"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:42:10 crc kubenswrapper[4998]: I0227 10:42:10.967791 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11764c1f-c462-471c-b5ab-e3822b04e887-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.249208 4998 generic.go:334] "Generic (PLEG): container finished" podID="11764c1f-c462-471c-b5ab-e3822b04e887" containerID="072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd" exitCode=0
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.249475 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbmkb" event={"ID":"11764c1f-c462-471c-b5ab-e3822b04e887","Type":"ContainerDied","Data":"072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd"}
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.250324 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbmkb" event={"ID":"11764c1f-c462-471c-b5ab-e3822b04e887","Type":"ContainerDied","Data":"521772b3a98c16ae0d37652ece127fc980bb843e271aaa1acd4ed99f53ee629e"}
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.250352 4998 scope.go:117] "RemoveContainer" containerID="072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd"
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.249564 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbmkb"
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.292689 4998 scope.go:117] "RemoveContainer" containerID="a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5"
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.312915 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xbmkb"]
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.322212 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xbmkb"]
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.326919 4998 scope.go:117] "RemoveContainer" containerID="951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83"
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.387346 4998 scope.go:117] "RemoveContainer" containerID="072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd"
Feb 27 10:42:11 crc kubenswrapper[4998]: E0227 10:42:11.387825 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd\": container with ID starting with 072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd not found: ID does not exist" containerID="072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd"
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.387894 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd"} err="failed to get container status \"072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd\": rpc error: code = NotFound desc = could not find container \"072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd\": container with ID starting with 072be09b45742b933461558139558802bc7e665185d0568e9e486c2b50deb3bd not found: ID does not exist"
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.387920 4998 scope.go:117] "RemoveContainer" containerID="a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5"
Feb 27 10:42:11 crc kubenswrapper[4998]: E0227 10:42:11.388149 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5\": container with ID starting with a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5 not found: ID does not exist" containerID="a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5"
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.388172 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5"} err="failed to get container status \"a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5\": rpc error: code = NotFound desc = could not find container \"a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5\": container with ID starting with a1ef54b8f91e396c3bc9b44c4e7ac257410d72807564df705c5522ad28af67f5 not found: ID does not exist"
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.388240 4998 scope.go:117] "RemoveContainer" containerID="951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83"
Feb 27 10:42:11 crc kubenswrapper[4998]: E0227 10:42:11.388498 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83\": container with ID starting with 951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83 not found: ID does not exist" containerID="951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83"
Feb 27 10:42:11 crc kubenswrapper[4998]: I0227 10:42:11.388527 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83"} err="failed to get container status \"951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83\": rpc error: code = NotFound desc = could not find container \"951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83\": container with ID starting with 951815444f29249eeba3fe2744ed74cdbe69d0c6596f07ffac19e1e1f356da83 not found: ID does not exist"
Feb 27 10:42:12 crc kubenswrapper[4998]: I0227 10:42:12.782969 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" path="/var/lib/kubelet/pods/11764c1f-c462-471c-b5ab-e3822b04e887/volumes"
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.139446 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-zvnkz"
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.225874 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-52g6d"]
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.226502 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" podUID="b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" containerName="dnsmasq-dns" containerID="cri-o://d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f" gracePeriod=10
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.738262 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d"
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.871313 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-swift-storage-0\") pod \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") "
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.871853 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-sb\") pod \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") "
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.871915 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-svc\") pod \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") "
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.871948 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-config\") pod \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") "
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.872005 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-nb\") pod \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") "
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.872115 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jxwr\" (UniqueName: \"kubernetes.io/projected/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-kube-api-access-9jxwr\") pod \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\" (UID: \"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14\") "
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.889613 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-kube-api-access-9jxwr" (OuterVolumeSpecName: "kube-api-access-9jxwr") pod "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" (UID: "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14"). InnerVolumeSpecName "kube-api-access-9jxwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.929321 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" (UID: "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.929904 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" (UID: "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.944486 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" (UID: "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.946412 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" (UID: "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.946410 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-config" (OuterVolumeSpecName: "config") pod "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" (UID: "b52da7f1-a553-4ea1-9e90-f0ea08ba7a14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.975532 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.975578 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jxwr\" (UniqueName: \"kubernetes.io/projected/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-kube-api-access-9jxwr\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.975593 4998 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.975605 4998 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.975622 4998 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:14 crc kubenswrapper[4998]: I0227 10:42:14.975634 4998 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.302613 4998 generic.go:334] "Generic (PLEG): container finished" podID="b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" containerID="d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f" exitCode=0
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.302674 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d"
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.302675 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" event={"ID":"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14","Type":"ContainerDied","Data":"d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f"}
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.302815 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-52g6d" event={"ID":"b52da7f1-a553-4ea1-9e90-f0ea08ba7a14","Type":"ContainerDied","Data":"72fc8679cf4772c9544660f22004ddb431a584c838de9799bcc047ab9d7bc1e3"}
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.302853 4998 scope.go:117] "RemoveContainer" containerID="d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f"
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.322950 4998 scope.go:117] "RemoveContainer" containerID="e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c"
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.347287 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-52g6d"]
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.356192 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-52g6d"]
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.358606 4998 scope.go:117] "RemoveContainer" containerID="d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f"
Feb 27 10:42:15 crc kubenswrapper[4998]: E0227 10:42:15.359045 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f\": container with ID starting with d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f not found: ID does not exist" containerID="d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f"
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.359077 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f"} err="failed to get container status \"d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f\": rpc error: code = NotFound desc = could not find container \"d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f\": container with ID starting with d6688dcf5ef02b6a7065429810edff01ffe1a532f29430cdcde8a90cf5bb7e9f not found: ID does not exist"
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.359098 4998 scope.go:117] "RemoveContainer" containerID="e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c"
Feb 27 10:42:15 crc kubenswrapper[4998]: E0227 10:42:15.359441 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c\": container with ID starting with e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c not found: ID does not exist" containerID="e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c"
Feb 27 10:42:15 crc kubenswrapper[4998]: I0227 10:42:15.359473 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c"} err="failed to get container status \"e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c\": rpc error: code = NotFound desc = could not find container \"e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c\": container with ID starting with e03a1c766c6792be431d73ac8a15d462881f334fc604e9fa7e7ec974e777f58c not found: ID does not exist"
Feb 27 10:42:16 crc kubenswrapper[4998]: I0227 10:42:16.775733 4998 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" path="/var/lib/kubelet/pods/b52da7f1-a553-4ea1-9e90-f0ea08ba7a14/volumes" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.095759 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw"] Feb 27 10:42:23 crc kubenswrapper[4998]: E0227 10:42:23.096582 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" containerName="registry-server" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.096595 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" containerName="registry-server" Feb 27 10:42:23 crc kubenswrapper[4998]: E0227 10:42:23.096619 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" containerName="extract-utilities" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.096625 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" containerName="extract-utilities" Feb 27 10:42:23 crc kubenswrapper[4998]: E0227 10:42:23.096637 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" containerName="init" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.096643 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" containerName="init" Feb 27 10:42:23 crc kubenswrapper[4998]: E0227 10:42:23.096663 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590d897d-12b6-491a-bf20-33c238b10871" containerName="oc" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.096669 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="590d897d-12b6-491a-bf20-33c238b10871" containerName="oc" Feb 27 10:42:23 crc kubenswrapper[4998]: E0227 10:42:23.096680 4998 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" containerName="dnsmasq-dns" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.096685 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" containerName="dnsmasq-dns" Feb 27 10:42:23 crc kubenswrapper[4998]: E0227 10:42:23.096697 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" containerName="extract-content" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.096703 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" containerName="extract-content" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.096863 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="590d897d-12b6-491a-bf20-33c238b10871" containerName="oc" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.096884 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="11764c1f-c462-471c-b5ab-e3822b04e887" containerName="registry-server" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.096897 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52da7f1-a553-4ea1-9e90-f0ea08ba7a14" containerName="dnsmasq-dns" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.101873 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.105456 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.105682 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.105895 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.107174 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.117118 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw"] Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.276494 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.276652 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.276726 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgdsr\" (UniqueName: \"kubernetes.io/projected/e1d55b1c-6f76-403e-ab08-b47e20de4314-kube-api-access-qgdsr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.277030 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.378516 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.378872 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.378997 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgdsr\" (UniqueName: 
\"kubernetes.io/projected/e1d55b1c-6f76-403e-ab08-b47e20de4314-kube-api-access-qgdsr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.379144 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.384258 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.384505 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.384548 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.394929 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgdsr\" (UniqueName: \"kubernetes.io/projected/e1d55b1c-6f76-403e-ab08-b47e20de4314-kube-api-access-qgdsr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.430935 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:23 crc kubenswrapper[4998]: I0227 10:42:23.995440 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw"] Feb 27 10:42:24 crc kubenswrapper[4998]: I0227 10:42:24.401870 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" event={"ID":"e1d55b1c-6f76-403e-ab08-b47e20de4314","Type":"ContainerStarted","Data":"4197be57aaf9a13e3a8646b3a6f2dcd2deb04f0b1091a3ec86054a3027c149c8"} Feb 27 10:42:33 crc kubenswrapper[4998]: I0227 10:42:32.998598 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:42:33 crc kubenswrapper[4998]: I0227 10:42:33.491809 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" event={"ID":"e1d55b1c-6f76-403e-ab08-b47e20de4314","Type":"ContainerStarted","Data":"9c5d30d6019989b6395a9df359f288ed4ee361356e82980c255ee62a923c3fd4"} Feb 27 10:42:33 crc kubenswrapper[4998]: I0227 10:42:33.513283 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" 
podStartSLOduration=1.5158411200000002 podStartE2EDuration="10.513264819s" podCreationTimestamp="2026-02-27 10:42:23 +0000 UTC" firstStartedPulling="2026-02-27 10:42:23.996856235 +0000 UTC m=+1495.995127213" lastFinishedPulling="2026-02-27 10:42:32.994279944 +0000 UTC m=+1504.992550912" observedRunningTime="2026-02-27 10:42:33.506569685 +0000 UTC m=+1505.504840653" watchObservedRunningTime="2026-02-27 10:42:33.513264819 +0000 UTC m=+1505.511535787" Feb 27 10:42:36 crc kubenswrapper[4998]: I0227 10:42:36.525141 4998 generic.go:334] "Generic (PLEG): container finished" podID="058b490f-35b3-42df-a3b3-a684664a0e44" containerID="13fab38c9ef8fb59d01b560ad7fccb175b8ee207edbbb4b5396a28b23815f05d" exitCode=0 Feb 27 10:42:36 crc kubenswrapper[4998]: I0227 10:42:36.525355 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"058b490f-35b3-42df-a3b3-a684664a0e44","Type":"ContainerDied","Data":"13fab38c9ef8fb59d01b560ad7fccb175b8ee207edbbb4b5396a28b23815f05d"} Feb 27 10:42:37 crc kubenswrapper[4998]: I0227 10:42:37.536144 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"058b490f-35b3-42df-a3b3-a684664a0e44","Type":"ContainerStarted","Data":"aa07de13692df9f638115165c3567d45001dba8ba178b58ed5e0a9185a7b9005"} Feb 27 10:42:37 crc kubenswrapper[4998]: I0227 10:42:37.536764 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 10:42:37 crc kubenswrapper[4998]: I0227 10:42:37.538353 4998 generic.go:334] "Generic (PLEG): container finished" podID="7d2295db-a05f-492a-82d7-295fd2222daf" containerID="a19148b92043467e836bf3ed92605bb7164775feebb7d35d0916e04da7b59e93" exitCode=0 Feb 27 10:42:37 crc kubenswrapper[4998]: I0227 10:42:37.538404 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"7d2295db-a05f-492a-82d7-295fd2222daf","Type":"ContainerDied","Data":"a19148b92043467e836bf3ed92605bb7164775feebb7d35d0916e04da7b59e93"} Feb 27 10:42:37 crc kubenswrapper[4998]: I0227 10:42:37.560821 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.560801692 podStartE2EDuration="36.560801692s" podCreationTimestamp="2026-02-27 10:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:42:37.560347897 +0000 UTC m=+1509.558618885" watchObservedRunningTime="2026-02-27 10:42:37.560801692 +0000 UTC m=+1509.559072660" Feb 27 10:42:38 crc kubenswrapper[4998]: I0227 10:42:38.561124 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7d2295db-a05f-492a-82d7-295fd2222daf","Type":"ContainerStarted","Data":"6f734aa0a124d35fb863fc41d9caa07828a52fefd914a286d5c7b1f53b23a602"} Feb 27 10:42:38 crc kubenswrapper[4998]: I0227 10:42:38.561864 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:38 crc kubenswrapper[4998]: I0227 10:42:38.585436 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.585421156 podStartE2EDuration="36.585421156s" podCreationTimestamp="2026-02-27 10:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:42:38.582365303 +0000 UTC m=+1510.580636271" watchObservedRunningTime="2026-02-27 10:42:38.585421156 +0000 UTC m=+1510.583692124" Feb 27 10:42:44 crc kubenswrapper[4998]: I0227 10:42:44.617077 4998 generic.go:334] "Generic (PLEG): container finished" podID="e1d55b1c-6f76-403e-ab08-b47e20de4314" 
containerID="9c5d30d6019989b6395a9df359f288ed4ee361356e82980c255ee62a923c3fd4" exitCode=0 Feb 27 10:42:44 crc kubenswrapper[4998]: I0227 10:42:44.617177 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" event={"ID":"e1d55b1c-6f76-403e-ab08-b47e20de4314","Type":"ContainerDied","Data":"9c5d30d6019989b6395a9df359f288ed4ee361356e82980c255ee62a923c3fd4"} Feb 27 10:42:45 crc kubenswrapper[4998]: I0227 10:42:45.952666 4998 scope.go:117] "RemoveContainer" containerID="88b207ed088e77bf20cf381bd8825fa32c16d5908fe2992af8efafd62c4b6824" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.126132 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.273329 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgdsr\" (UniqueName: \"kubernetes.io/projected/e1d55b1c-6f76-403e-ab08-b47e20de4314-kube-api-access-qgdsr\") pod \"e1d55b1c-6f76-403e-ab08-b47e20de4314\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.273540 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-repo-setup-combined-ca-bundle\") pod \"e1d55b1c-6f76-403e-ab08-b47e20de4314\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.273660 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-inventory\") pod \"e1d55b1c-6f76-403e-ab08-b47e20de4314\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.273723 4998 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-ssh-key-openstack-edpm-ipam\") pod \"e1d55b1c-6f76-403e-ab08-b47e20de4314\" (UID: \"e1d55b1c-6f76-403e-ab08-b47e20de4314\") " Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.279551 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d55b1c-6f76-403e-ab08-b47e20de4314-kube-api-access-qgdsr" (OuterVolumeSpecName: "kube-api-access-qgdsr") pod "e1d55b1c-6f76-403e-ab08-b47e20de4314" (UID: "e1d55b1c-6f76-403e-ab08-b47e20de4314"). InnerVolumeSpecName "kube-api-access-qgdsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.289625 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e1d55b1c-6f76-403e-ab08-b47e20de4314" (UID: "e1d55b1c-6f76-403e-ab08-b47e20de4314"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.311722 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e1d55b1c-6f76-403e-ab08-b47e20de4314" (UID: "e1d55b1c-6f76-403e-ab08-b47e20de4314"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.312537 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-inventory" (OuterVolumeSpecName: "inventory") pod "e1d55b1c-6f76-403e-ab08-b47e20de4314" (UID: "e1d55b1c-6f76-403e-ab08-b47e20de4314"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.376211 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.376271 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.376290 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgdsr\" (UniqueName: \"kubernetes.io/projected/e1d55b1c-6f76-403e-ab08-b47e20de4314-kube-api-access-qgdsr\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.376303 4998 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1d55b1c-6f76-403e-ab08-b47e20de4314-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.641030 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" event={"ID":"e1d55b1c-6f76-403e-ab08-b47e20de4314","Type":"ContainerDied","Data":"4197be57aaf9a13e3a8646b3a6f2dcd2deb04f0b1091a3ec86054a3027c149c8"} Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.641404 
4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4197be57aaf9a13e3a8646b3a6f2dcd2deb04f0b1091a3ec86054a3027c149c8" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.641501 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.839292 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2"] Feb 27 10:42:46 crc kubenswrapper[4998]: E0227 10:42:46.839825 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d55b1c-6f76-403e-ab08-b47e20de4314" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.839850 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d55b1c-6f76-403e-ab08-b47e20de4314" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.840114 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d55b1c-6f76-403e-ab08-b47e20de4314" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.840845 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.846732 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.846730 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.847065 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.847357 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.884295 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2"] Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.990913 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kzbf2\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.991075 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kzbf2\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:46 crc kubenswrapper[4998]: I0227 10:42:46.991124 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbw5l\" (UniqueName: \"kubernetes.io/projected/e543d9a5-720b-41ac-92ab-8a2895ce25c3-kube-api-access-bbw5l\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kzbf2\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:47 crc kubenswrapper[4998]: I0227 10:42:47.093056 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kzbf2\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:47 crc kubenswrapper[4998]: I0227 10:42:47.093141 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbw5l\" (UniqueName: \"kubernetes.io/projected/e543d9a5-720b-41ac-92ab-8a2895ce25c3-kube-api-access-bbw5l\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kzbf2\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:47 crc kubenswrapper[4998]: I0227 10:42:47.093259 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kzbf2\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:47 crc kubenswrapper[4998]: I0227 10:42:47.099498 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-kzbf2\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:47 crc kubenswrapper[4998]: I0227 10:42:47.100516 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kzbf2\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:47 crc kubenswrapper[4998]: I0227 10:42:47.116011 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbw5l\" (UniqueName: \"kubernetes.io/projected/e543d9a5-720b-41ac-92ab-8a2895ce25c3-kube-api-access-bbw5l\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kzbf2\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:47 crc kubenswrapper[4998]: I0227 10:42:47.171002 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:47 crc kubenswrapper[4998]: I0227 10:42:47.703020 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2"] Feb 27 10:42:48 crc kubenswrapper[4998]: I0227 10:42:48.665594 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" event={"ID":"e543d9a5-720b-41ac-92ab-8a2895ce25c3","Type":"ContainerStarted","Data":"1b17e4075eec7907481691030e3a119170fbd8055650e3f3bbfc7adb3218a468"} Feb 27 10:42:48 crc kubenswrapper[4998]: I0227 10:42:48.666966 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" event={"ID":"e543d9a5-720b-41ac-92ab-8a2895ce25c3","Type":"ContainerStarted","Data":"8007bc3f955620e8020a30b54afe3c610afa87b09630d85205f6152c5e8c62b4"} Feb 27 10:42:48 crc kubenswrapper[4998]: I0227 10:42:48.690591 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" podStartSLOduration=2.233749275 podStartE2EDuration="2.690571025s" podCreationTimestamp="2026-02-27 10:42:46 +0000 UTC" firstStartedPulling="2026-02-27 10:42:47.710197306 +0000 UTC m=+1519.708468274" lastFinishedPulling="2026-02-27 10:42:48.167019056 +0000 UTC m=+1520.165290024" observedRunningTime="2026-02-27 10:42:48.685719843 +0000 UTC m=+1520.683990831" watchObservedRunningTime="2026-02-27 10:42:48.690571025 +0000 UTC m=+1520.688841993" Feb 27 10:42:51 crc kubenswrapper[4998]: I0227 10:42:51.696670 4998 generic.go:334] "Generic (PLEG): container finished" podID="e543d9a5-720b-41ac-92ab-8a2895ce25c3" containerID="1b17e4075eec7907481691030e3a119170fbd8055650e3f3bbfc7adb3218a468" exitCode=0 Feb 27 10:42:51 crc kubenswrapper[4998]: I0227 10:42:51.696980 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" event={"ID":"e543d9a5-720b-41ac-92ab-8a2895ce25c3","Type":"ContainerDied","Data":"1b17e4075eec7907481691030e3a119170fbd8055650e3f3bbfc7adb3218a468"} Feb 27 10:42:52 crc kubenswrapper[4998]: I0227 10:42:52.128451 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 10:42:52 crc kubenswrapper[4998]: I0227 10:42:52.925140 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.214583 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.330888 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-ssh-key-openstack-edpm-ipam\") pod \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.331103 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbw5l\" (UniqueName: \"kubernetes.io/projected/e543d9a5-720b-41ac-92ab-8a2895ce25c3-kube-api-access-bbw5l\") pod \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.331358 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-inventory\") pod \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\" (UID: \"e543d9a5-720b-41ac-92ab-8a2895ce25c3\") " Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.352552 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e543d9a5-720b-41ac-92ab-8a2895ce25c3-kube-api-access-bbw5l" (OuterVolumeSpecName: "kube-api-access-bbw5l") pod "e543d9a5-720b-41ac-92ab-8a2895ce25c3" (UID: "e543d9a5-720b-41ac-92ab-8a2895ce25c3"). InnerVolumeSpecName "kube-api-access-bbw5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.365596 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-inventory" (OuterVolumeSpecName: "inventory") pod "e543d9a5-720b-41ac-92ab-8a2895ce25c3" (UID: "e543d9a5-720b-41ac-92ab-8a2895ce25c3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.368576 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e543d9a5-720b-41ac-92ab-8a2895ce25c3" (UID: "e543d9a5-720b-41ac-92ab-8a2895ce25c3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.433781 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.433821 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e543d9a5-720b-41ac-92ab-8a2895ce25c3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.433835 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbw5l\" (UniqueName: \"kubernetes.io/projected/e543d9a5-720b-41ac-92ab-8a2895ce25c3-kube-api-access-bbw5l\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.740062 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" event={"ID":"e543d9a5-720b-41ac-92ab-8a2895ce25c3","Type":"ContainerDied","Data":"8007bc3f955620e8020a30b54afe3c610afa87b09630d85205f6152c5e8c62b4"} Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.740111 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8007bc3f955620e8020a30b54afe3c610afa87b09630d85205f6152c5e8c62b4" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.740188 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kzbf2" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.814032 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl"] Feb 27 10:42:53 crc kubenswrapper[4998]: E0227 10:42:53.814608 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e543d9a5-720b-41ac-92ab-8a2895ce25c3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.814636 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e543d9a5-720b-41ac-92ab-8a2895ce25c3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.814910 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="e543d9a5-720b-41ac-92ab-8a2895ce25c3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.815844 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.818355 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.818488 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.820749 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.820963 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.828928 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl"] Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.844481 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.844541 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfzt\" (UniqueName: \"kubernetes.io/projected/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-kube-api-access-xwfzt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.844613 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.844681 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.946141 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.946759 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfzt\" (UniqueName: \"kubernetes.io/projected/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-kube-api-access-xwfzt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.946809 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.946867 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.950927 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.951871 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.961017 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfzt\" (UniqueName: \"kubernetes.io/projected/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-kube-api-access-xwfzt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:53 crc kubenswrapper[4998]: I0227 10:42:53.962944 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:54 crc kubenswrapper[4998]: I0227 10:42:54.137276 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:42:54 crc kubenswrapper[4998]: I0227 10:42:54.693786 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl"] Feb 27 10:42:54 crc kubenswrapper[4998]: W0227 10:42:54.695258 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf074ce9_dfbc_44fe_8839_67aa9d2d43f8.slice/crio-32563f1752fafc6d9c8328aee022369ff29e623282f4610108c7a9a15f582c10 WatchSource:0}: Error finding container 32563f1752fafc6d9c8328aee022369ff29e623282f4610108c7a9a15f582c10: Status 404 returned error can't find the container with id 32563f1752fafc6d9c8328aee022369ff29e623282f4610108c7a9a15f582c10 Feb 27 10:42:54 crc kubenswrapper[4998]: I0227 10:42:54.748754 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" event={"ID":"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8","Type":"ContainerStarted","Data":"32563f1752fafc6d9c8328aee022369ff29e623282f4610108c7a9a15f582c10"} Feb 27 10:42:55 crc kubenswrapper[4998]: I0227 10:42:55.759115 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" 
event={"ID":"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8","Type":"ContainerStarted","Data":"72b8e867a981a9a541a52ee66009ae8748a9592c48c1b6fd987ab30ca33b5d9e"} Feb 27 10:42:55 crc kubenswrapper[4998]: I0227 10:42:55.787705 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" podStartSLOduration=2.220984398 podStartE2EDuration="2.787679003s" podCreationTimestamp="2026-02-27 10:42:53 +0000 UTC" firstStartedPulling="2026-02-27 10:42:54.697548393 +0000 UTC m=+1526.695819361" lastFinishedPulling="2026-02-27 10:42:55.264242988 +0000 UTC m=+1527.262513966" observedRunningTime="2026-02-27 10:42:55.777347986 +0000 UTC m=+1527.775618974" watchObservedRunningTime="2026-02-27 10:42:55.787679003 +0000 UTC m=+1527.785949971" Feb 27 10:43:10 crc kubenswrapper[4998]: I0227 10:43:10.504420 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:43:10 crc kubenswrapper[4998]: I0227 10:43:10.504971 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:43:31 crc kubenswrapper[4998]: I0227 10:43:31.797036 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fn8h9"] Feb 27 10:43:31 crc kubenswrapper[4998]: I0227 10:43:31.800908 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:31 crc kubenswrapper[4998]: I0227 10:43:31.814480 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn8h9"] Feb 27 10:43:31 crc kubenswrapper[4998]: I0227 10:43:31.998817 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-utilities\") pod \"redhat-marketplace-fn8h9\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:31 crc kubenswrapper[4998]: I0227 10:43:31.998882 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-catalog-content\") pod \"redhat-marketplace-fn8h9\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:31 crc kubenswrapper[4998]: I0227 10:43:31.998978 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr5wd\" (UniqueName: \"kubernetes.io/projected/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-kube-api-access-fr5wd\") pod \"redhat-marketplace-fn8h9\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:32 crc kubenswrapper[4998]: I0227 10:43:32.099944 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-utilities\") pod \"redhat-marketplace-fn8h9\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:32 crc kubenswrapper[4998]: I0227 10:43:32.099998 4998 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-catalog-content\") pod \"redhat-marketplace-fn8h9\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:32 crc kubenswrapper[4998]: I0227 10:43:32.100069 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr5wd\" (UniqueName: \"kubernetes.io/projected/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-kube-api-access-fr5wd\") pod \"redhat-marketplace-fn8h9\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:32 crc kubenswrapper[4998]: I0227 10:43:32.100644 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-utilities\") pod \"redhat-marketplace-fn8h9\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:32 crc kubenswrapper[4998]: I0227 10:43:32.100667 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-catalog-content\") pod \"redhat-marketplace-fn8h9\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:32 crc kubenswrapper[4998]: I0227 10:43:32.121113 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr5wd\" (UniqueName: \"kubernetes.io/projected/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-kube-api-access-fr5wd\") pod \"redhat-marketplace-fn8h9\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:32 crc kubenswrapper[4998]: I0227 10:43:32.421466 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:32 crc kubenswrapper[4998]: I0227 10:43:32.869092 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn8h9"] Feb 27 10:43:33 crc kubenswrapper[4998]: I0227 10:43:33.114803 4998 generic.go:334] "Generic (PLEG): container finished" podID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerID="ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1" exitCode=0 Feb 27 10:43:33 crc kubenswrapper[4998]: I0227 10:43:33.114873 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn8h9" event={"ID":"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3","Type":"ContainerDied","Data":"ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1"} Feb 27 10:43:33 crc kubenswrapper[4998]: I0227 10:43:33.115427 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn8h9" event={"ID":"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3","Type":"ContainerStarted","Data":"570c699ae94f55ef9276fefcc5fbc073bd6bce1ef315b8f2619cbce5b5d89655"} Feb 27 10:43:34 crc kubenswrapper[4998]: I0227 10:43:34.127800 4998 generic.go:334] "Generic (PLEG): container finished" podID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerID="7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28" exitCode=0 Feb 27 10:43:34 crc kubenswrapper[4998]: I0227 10:43:34.127899 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn8h9" event={"ID":"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3","Type":"ContainerDied","Data":"7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28"} Feb 27 10:43:34 crc kubenswrapper[4998]: I0227 10:43:34.130218 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:43:35 crc kubenswrapper[4998]: I0227 10:43:35.141914 4998 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-fn8h9" event={"ID":"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3","Type":"ContainerStarted","Data":"9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40"} Feb 27 10:43:35 crc kubenswrapper[4998]: I0227 10:43:35.172670 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fn8h9" podStartSLOduration=2.706749696 podStartE2EDuration="4.172650889s" podCreationTimestamp="2026-02-27 10:43:31 +0000 UTC" firstStartedPulling="2026-02-27 10:43:33.117150512 +0000 UTC m=+1565.115421500" lastFinishedPulling="2026-02-27 10:43:34.583051725 +0000 UTC m=+1566.581322693" observedRunningTime="2026-02-27 10:43:35.163588844 +0000 UTC m=+1567.161859852" watchObservedRunningTime="2026-02-27 10:43:35.172650889 +0000 UTC m=+1567.170921857" Feb 27 10:43:40 crc kubenswrapper[4998]: I0227 10:43:40.504583 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:43:40 crc kubenswrapper[4998]: I0227 10:43:40.505211 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:43:42 crc kubenswrapper[4998]: I0227 10:43:42.421699 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:42 crc kubenswrapper[4998]: I0227 10:43:42.421968 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:42 crc 
kubenswrapper[4998]: I0227 10:43:42.480737 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:43 crc kubenswrapper[4998]: I0227 10:43:43.255956 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:43 crc kubenswrapper[4998]: I0227 10:43:43.322799 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn8h9"] Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.235459 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fn8h9" podUID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerName="registry-server" containerID="cri-o://9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40" gracePeriod=2 Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.715352 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.768469 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-utilities\") pod \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.768548 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr5wd\" (UniqueName: \"kubernetes.io/projected/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-kube-api-access-fr5wd\") pod \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.768594 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-catalog-content\") pod \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\" (UID: \"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3\") " Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.769358 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-utilities" (OuterVolumeSpecName: "utilities") pod "5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" (UID: "5c4d1bff-af82-4f2d-aa17-d4897c54c4a3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.769481 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.774919 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-kube-api-access-fr5wd" (OuterVolumeSpecName: "kube-api-access-fr5wd") pod "5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" (UID: "5c4d1bff-af82-4f2d-aa17-d4897c54c4a3"). InnerVolumeSpecName "kube-api-access-fr5wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.793108 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" (UID: "5c4d1bff-af82-4f2d-aa17-d4897c54c4a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.871609 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr5wd\" (UniqueName: \"kubernetes.io/projected/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-kube-api-access-fr5wd\") on node \"crc\" DevicePath \"\"" Feb 27 10:43:45 crc kubenswrapper[4998]: I0227 10:43:45.871651 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.116131 4998 scope.go:117] "RemoveContainer" containerID="0ba3fcea0f9d222ef24cecc7742419820c91cefe7c2bbeb2a0e7fc1b0076d863" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.149574 4998 scope.go:117] "RemoveContainer" containerID="87a1454506eff43f94db408a6272336f46547d1eb75b6b7ec2ad47b71e6445b0" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.205567 4998 scope.go:117] "RemoveContainer" containerID="42f98cdbca523215a2588b6ba36df2dc314000f1fd3621d1824b3e4eb60b9a3d" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.263682 4998 generic.go:334] "Generic (PLEG): container finished" podID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerID="9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40" exitCode=0 Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.263716 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn8h9" event={"ID":"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3","Type":"ContainerDied","Data":"9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40"} Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.263737 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn8h9" 
event={"ID":"5c4d1bff-af82-4f2d-aa17-d4897c54c4a3","Type":"ContainerDied","Data":"570c699ae94f55ef9276fefcc5fbc073bd6bce1ef315b8f2619cbce5b5d89655"} Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.263754 4998 scope.go:117] "RemoveContainer" containerID="9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.263868 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn8h9" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.299678 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn8h9"] Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.310315 4998 scope.go:117] "RemoveContainer" containerID="7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.310492 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn8h9"] Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.327300 4998 scope.go:117] "RemoveContainer" containerID="ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.345520 4998 scope.go:117] "RemoveContainer" containerID="9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40" Feb 27 10:43:46 crc kubenswrapper[4998]: E0227 10:43:46.346041 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40\": container with ID starting with 9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40 not found: ID does not exist" containerID="9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.346101 4998 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40"} err="failed to get container status \"9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40\": rpc error: code = NotFound desc = could not find container \"9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40\": container with ID starting with 9963097e1e2e67832a9fea415f07fdfbb01c8a525f97b208127ce4e77edd8d40 not found: ID does not exist" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.346137 4998 scope.go:117] "RemoveContainer" containerID="7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28" Feb 27 10:43:46 crc kubenswrapper[4998]: E0227 10:43:46.346525 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28\": container with ID starting with 7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28 not found: ID does not exist" containerID="7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.346558 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28"} err="failed to get container status \"7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28\": rpc error: code = NotFound desc = could not find container \"7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28\": container with ID starting with 7466e596d4a875c19beea0137d4ba4f1ba8a9b4540358176f1ab321e7ee51d28 not found: ID does not exist" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.346577 4998 scope.go:117] "RemoveContainer" containerID="ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1" Feb 27 10:43:46 crc kubenswrapper[4998]: E0227 10:43:46.346834 4998 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1\": container with ID starting with ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1 not found: ID does not exist" containerID="ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.346875 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1"} err="failed to get container status \"ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1\": rpc error: code = NotFound desc = could not find container \"ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1\": container with ID starting with ab6aa3b629418da06d0637346036814d2ce444a2c5d5fa368293debfc8a95ba1 not found: ID does not exist" Feb 27 10:43:46 crc kubenswrapper[4998]: I0227 10:43:46.779001 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" path="/var/lib/kubelet/pods/5c4d1bff-af82-4f2d-aa17-d4897c54c4a3/volumes" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.147845 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536484-jpxns"] Feb 27 10:44:00 crc kubenswrapper[4998]: E0227 10:44:00.149867 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerName="extract-utilities" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.149973 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerName="extract-utilities" Feb 27 10:44:00 crc kubenswrapper[4998]: E0227 10:44:00.150088 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerName="registry-server" Feb 27 10:44:00 crc 
kubenswrapper[4998]: I0227 10:44:00.150181 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerName="registry-server" Feb 27 10:44:00 crc kubenswrapper[4998]: E0227 10:44:00.150341 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerName="extract-content" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.150420 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerName="extract-content" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.150722 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4d1bff-af82-4f2d-aa17-d4897c54c4a3" containerName="registry-server" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.151578 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536484-jpxns" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.154992 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.155336 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.155602 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.161461 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536484-jpxns"] Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.254994 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78md\" (UniqueName: \"kubernetes.io/projected/44ba4b4e-2b1b-41b9-8535-84119f0db0e9-kube-api-access-x78md\") pod 
\"auto-csr-approver-29536484-jpxns\" (UID: \"44ba4b4e-2b1b-41b9-8535-84119f0db0e9\") " pod="openshift-infra/auto-csr-approver-29536484-jpxns" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.357655 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x78md\" (UniqueName: \"kubernetes.io/projected/44ba4b4e-2b1b-41b9-8535-84119f0db0e9-kube-api-access-x78md\") pod \"auto-csr-approver-29536484-jpxns\" (UID: \"44ba4b4e-2b1b-41b9-8535-84119f0db0e9\") " pod="openshift-infra/auto-csr-approver-29536484-jpxns" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.377510 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x78md\" (UniqueName: \"kubernetes.io/projected/44ba4b4e-2b1b-41b9-8535-84119f0db0e9-kube-api-access-x78md\") pod \"auto-csr-approver-29536484-jpxns\" (UID: \"44ba4b4e-2b1b-41b9-8535-84119f0db0e9\") " pod="openshift-infra/auto-csr-approver-29536484-jpxns" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.472206 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536484-jpxns" Feb 27 10:44:00 crc kubenswrapper[4998]: I0227 10:44:00.919322 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536484-jpxns"] Feb 27 10:44:01 crc kubenswrapper[4998]: I0227 10:44:01.418196 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536484-jpxns" event={"ID":"44ba4b4e-2b1b-41b9-8535-84119f0db0e9","Type":"ContainerStarted","Data":"f98b42fa223df66aed80e6ef779110d95fd0a019289c3ac7af35a608472d11b7"} Feb 27 10:44:02 crc kubenswrapper[4998]: I0227 10:44:02.429134 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536484-jpxns" event={"ID":"44ba4b4e-2b1b-41b9-8535-84119f0db0e9","Type":"ContainerStarted","Data":"1f2bd1a75c1db5d148f7f3589d1ccaf5bbda642064c1c14754660ab554c548fb"} Feb 27 10:44:02 crc kubenswrapper[4998]: I0227 10:44:02.447570 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536484-jpxns" podStartSLOduration=1.258153296 podStartE2EDuration="2.447544375s" podCreationTimestamp="2026-02-27 10:44:00 +0000 UTC" firstStartedPulling="2026-02-27 10:44:00.927412224 +0000 UTC m=+1592.925683192" lastFinishedPulling="2026-02-27 10:44:02.116803283 +0000 UTC m=+1594.115074271" observedRunningTime="2026-02-27 10:44:02.442992842 +0000 UTC m=+1594.441263810" watchObservedRunningTime="2026-02-27 10:44:02.447544375 +0000 UTC m=+1594.445815343" Feb 27 10:44:03 crc kubenswrapper[4998]: I0227 10:44:03.439814 4998 generic.go:334] "Generic (PLEG): container finished" podID="44ba4b4e-2b1b-41b9-8535-84119f0db0e9" containerID="1f2bd1a75c1db5d148f7f3589d1ccaf5bbda642064c1c14754660ab554c548fb" exitCode=0 Feb 27 10:44:03 crc kubenswrapper[4998]: I0227 10:44:03.439857 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536484-jpxns" 
event={"ID":"44ba4b4e-2b1b-41b9-8535-84119f0db0e9","Type":"ContainerDied","Data":"1f2bd1a75c1db5d148f7f3589d1ccaf5bbda642064c1c14754660ab554c548fb"} Feb 27 10:44:04 crc kubenswrapper[4998]: I0227 10:44:04.792813 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536484-jpxns" Feb 27 10:44:04 crc kubenswrapper[4998]: I0227 10:44:04.851767 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x78md\" (UniqueName: \"kubernetes.io/projected/44ba4b4e-2b1b-41b9-8535-84119f0db0e9-kube-api-access-x78md\") pod \"44ba4b4e-2b1b-41b9-8535-84119f0db0e9\" (UID: \"44ba4b4e-2b1b-41b9-8535-84119f0db0e9\") " Feb 27 10:44:04 crc kubenswrapper[4998]: I0227 10:44:04.857771 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ba4b4e-2b1b-41b9-8535-84119f0db0e9-kube-api-access-x78md" (OuterVolumeSpecName: "kube-api-access-x78md") pod "44ba4b4e-2b1b-41b9-8535-84119f0db0e9" (UID: "44ba4b4e-2b1b-41b9-8535-84119f0db0e9"). InnerVolumeSpecName "kube-api-access-x78md". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:44:04 crc kubenswrapper[4998]: I0227 10:44:04.954178 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x78md\" (UniqueName: \"kubernetes.io/projected/44ba4b4e-2b1b-41b9-8535-84119f0db0e9-kube-api-access-x78md\") on node \"crc\" DevicePath \"\"" Feb 27 10:44:05 crc kubenswrapper[4998]: I0227 10:44:05.473219 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536484-jpxns" event={"ID":"44ba4b4e-2b1b-41b9-8535-84119f0db0e9","Type":"ContainerDied","Data":"f98b42fa223df66aed80e6ef779110d95fd0a019289c3ac7af35a608472d11b7"} Feb 27 10:44:05 crc kubenswrapper[4998]: I0227 10:44:05.473271 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f98b42fa223df66aed80e6ef779110d95fd0a019289c3ac7af35a608472d11b7" Feb 27 10:44:05 crc kubenswrapper[4998]: I0227 10:44:05.473306 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536484-jpxns" Feb 27 10:44:05 crc kubenswrapper[4998]: I0227 10:44:05.521659 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536478-6z2th"] Feb 27 10:44:05 crc kubenswrapper[4998]: I0227 10:44:05.531081 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536478-6z2th"] Feb 27 10:44:06 crc kubenswrapper[4998]: I0227 10:44:06.785809 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="525020c3-603a-4430-8f28-1743f62fb179" path="/var/lib/kubelet/pods/525020c3-603a-4430-8f28-1743f62fb179/volumes" Feb 27 10:44:10 crc kubenswrapper[4998]: I0227 10:44:10.505533 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 27 10:44:10 crc kubenswrapper[4998]: I0227 10:44:10.506358 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:44:10 crc kubenswrapper[4998]: I0227 10:44:10.506444 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:44:10 crc kubenswrapper[4998]: I0227 10:44:10.507803 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:44:10 crc kubenswrapper[4998]: I0227 10:44:10.507931 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" gracePeriod=600 Feb 27 10:44:10 crc kubenswrapper[4998]: E0227 10:44:10.663154 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:44:11 crc kubenswrapper[4998]: 
I0227 10:44:11.534569 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49"} Feb 27 10:44:11 crc kubenswrapper[4998]: I0227 10:44:11.534579 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" exitCode=0 Feb 27 10:44:11 crc kubenswrapper[4998]: I0227 10:44:11.534639 4998 scope.go:117] "RemoveContainer" containerID="fa835617bfc870e1b2eabc00e16bdc9b210a2250fe70bb608d05ed5f2f06bfbc" Feb 27 10:44:11 crc kubenswrapper[4998]: I0227 10:44:11.535705 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:44:11 crc kubenswrapper[4998]: E0227 10:44:11.536418 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:44:26 crc kubenswrapper[4998]: I0227 10:44:26.765438 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:44:26 crc kubenswrapper[4998]: E0227 10:44:26.767097 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:44:40 crc kubenswrapper[4998]: I0227 10:44:40.765169 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:44:40 crc kubenswrapper[4998]: E0227 10:44:40.766020 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:44:46 crc kubenswrapper[4998]: I0227 10:44:46.310072 4998 scope.go:117] "RemoveContainer" containerID="7b8c3abb1720b489976607ab04f56862c647bf6c64245aa94721312299b41b7c" Feb 27 10:44:46 crc kubenswrapper[4998]: I0227 10:44:46.351163 4998 scope.go:117] "RemoveContainer" containerID="49e89a12c7099fce6728d4b69138d6d1cb485e4360dc8f988fabd11fdd316cfe" Feb 27 10:44:46 crc kubenswrapper[4998]: I0227 10:44:46.397721 4998 scope.go:117] "RemoveContainer" containerID="68e79fd98d37b7031b30a7a293616484b3448431de5ca8febeb855b0cf6bfa4c" Feb 27 10:44:46 crc kubenswrapper[4998]: I0227 10:44:46.441840 4998 scope.go:117] "RemoveContainer" containerID="f25d062ee75caa879239d82c88522c8a89ff642eb2088ca30aca323a958a5c6e" Feb 27 10:44:52 crc kubenswrapper[4998]: I0227 10:44:52.765465 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:44:52 crc kubenswrapper[4998]: E0227 10:44:52.781145 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.175926 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9"] Feb 27 10:45:00 crc kubenswrapper[4998]: E0227 10:45:00.177669 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ba4b4e-2b1b-41b9-8535-84119f0db0e9" containerName="oc" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.177703 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ba4b4e-2b1b-41b9-8535-84119f0db0e9" containerName="oc" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.178189 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ba4b4e-2b1b-41b9-8535-84119f0db0e9" containerName="oc" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.179633 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.182033 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.182524 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.194179 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9"] Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.237699 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq49m\" (UniqueName: \"kubernetes.io/projected/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-kube-api-access-rq49m\") pod \"collect-profiles-29536485-lwwj9\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.237760 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-config-volume\") pod \"collect-profiles-29536485-lwwj9\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.237783 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-secret-volume\") pod \"collect-profiles-29536485-lwwj9\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.339844 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq49m\" (UniqueName: \"kubernetes.io/projected/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-kube-api-access-rq49m\") pod \"collect-profiles-29536485-lwwj9\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.339918 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-config-volume\") pod \"collect-profiles-29536485-lwwj9\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.339945 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-secret-volume\") pod \"collect-profiles-29536485-lwwj9\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.341172 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-config-volume\") pod \"collect-profiles-29536485-lwwj9\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.348954 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-secret-volume\") pod \"collect-profiles-29536485-lwwj9\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.356337 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq49m\" (UniqueName: \"kubernetes.io/projected/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-kube-api-access-rq49m\") pod \"collect-profiles-29536485-lwwj9\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.513707 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:00 crc kubenswrapper[4998]: I0227 10:45:00.942867 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9"] Feb 27 10:45:01 crc kubenswrapper[4998]: I0227 10:45:01.002206 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" event={"ID":"c0e42edd-f49c-4cdb-88c9-7cd23809ae57","Type":"ContainerStarted","Data":"57618970a70188b6646c74f73b2526116801120ef714f0e5098b3fa6026a9580"} Feb 27 10:45:02 crc kubenswrapper[4998]: I0227 10:45:02.020048 4998 generic.go:334] "Generic (PLEG): container finished" podID="c0e42edd-f49c-4cdb-88c9-7cd23809ae57" containerID="3ef5bfed4f49e3f5717fb792c010c04a91e4315ac168591becc8a925e6999a33" exitCode=0 Feb 27 10:45:02 crc kubenswrapper[4998]: I0227 10:45:02.020116 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" 
event={"ID":"c0e42edd-f49c-4cdb-88c9-7cd23809ae57","Type":"ContainerDied","Data":"3ef5bfed4f49e3f5717fb792c010c04a91e4315ac168591becc8a925e6999a33"} Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.439467 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.509084 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-config-volume\") pod \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.509272 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq49m\" (UniqueName: \"kubernetes.io/projected/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-kube-api-access-rq49m\") pod \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.509470 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-secret-volume\") pod \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\" (UID: \"c0e42edd-f49c-4cdb-88c9-7cd23809ae57\") " Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.511566 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-config-volume" (OuterVolumeSpecName: "config-volume") pod "c0e42edd-f49c-4cdb-88c9-7cd23809ae57" (UID: "c0e42edd-f49c-4cdb-88c9-7cd23809ae57"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.515966 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c0e42edd-f49c-4cdb-88c9-7cd23809ae57" (UID: "c0e42edd-f49c-4cdb-88c9-7cd23809ae57"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.517607 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-kube-api-access-rq49m" (OuterVolumeSpecName: "kube-api-access-rq49m") pod "c0e42edd-f49c-4cdb-88c9-7cd23809ae57" (UID: "c0e42edd-f49c-4cdb-88c9-7cd23809ae57"). InnerVolumeSpecName "kube-api-access-rq49m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.611323 4998 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.611358 4998 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:03 crc kubenswrapper[4998]: I0227 10:45:03.611369 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq49m\" (UniqueName: \"kubernetes.io/projected/c0e42edd-f49c-4cdb-88c9-7cd23809ae57-kube-api-access-rq49m\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:04 crc kubenswrapper[4998]: I0227 10:45:04.044530 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" 
event={"ID":"c0e42edd-f49c-4cdb-88c9-7cd23809ae57","Type":"ContainerDied","Data":"57618970a70188b6646c74f73b2526116801120ef714f0e5098b3fa6026a9580"} Feb 27 10:45:04 crc kubenswrapper[4998]: I0227 10:45:04.044581 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57618970a70188b6646c74f73b2526116801120ef714f0e5098b3fa6026a9580" Feb 27 10:45:04 crc kubenswrapper[4998]: I0227 10:45:04.044647 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-lwwj9" Feb 27 10:45:06 crc kubenswrapper[4998]: I0227 10:45:06.766481 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:45:06 crc kubenswrapper[4998]: E0227 10:45:06.767197 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:45:12 crc kubenswrapper[4998]: I0227 10:45:12.953777 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s8w6x"] Feb 27 10:45:12 crc kubenswrapper[4998]: E0227 10:45:12.954605 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e42edd-f49c-4cdb-88c9-7cd23809ae57" containerName="collect-profiles" Feb 27 10:45:12 crc kubenswrapper[4998]: I0227 10:45:12.954618 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e42edd-f49c-4cdb-88c9-7cd23809ae57" containerName="collect-profiles" Feb 27 10:45:12 crc kubenswrapper[4998]: I0227 10:45:12.954799 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e42edd-f49c-4cdb-88c9-7cd23809ae57" 
containerName="collect-profiles" Feb 27 10:45:12 crc kubenswrapper[4998]: I0227 10:45:12.958656 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:12 crc kubenswrapper[4998]: I0227 10:45:12.985613 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8w6x"] Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.000359 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-catalog-content\") pod \"community-operators-s8w6x\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.000489 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb29x\" (UniqueName: \"kubernetes.io/projected/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-kube-api-access-mb29x\") pod \"community-operators-s8w6x\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.000668 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-utilities\") pod \"community-operators-s8w6x\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.102793 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-utilities\") pod \"community-operators-s8w6x\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " 
pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.102882 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-catalog-content\") pod \"community-operators-s8w6x\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.102973 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb29x\" (UniqueName: \"kubernetes.io/projected/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-kube-api-access-mb29x\") pod \"community-operators-s8w6x\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.103458 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-utilities\") pod \"community-operators-s8w6x\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.103577 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-catalog-content\") pod \"community-operators-s8w6x\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.128503 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb29x\" (UniqueName: \"kubernetes.io/projected/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-kube-api-access-mb29x\") pod \"community-operators-s8w6x\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " 
pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.281340 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:13 crc kubenswrapper[4998]: I0227 10:45:13.836802 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8w6x"] Feb 27 10:45:14 crc kubenswrapper[4998]: I0227 10:45:14.146702 4998 generic.go:334] "Generic (PLEG): container finished" podID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerID="dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f" exitCode=0 Feb 27 10:45:14 crc kubenswrapper[4998]: I0227 10:45:14.146775 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8w6x" event={"ID":"d00f7c61-1e4b-4d27-93d8-856fd11b09aa","Type":"ContainerDied","Data":"dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f"} Feb 27 10:45:14 crc kubenswrapper[4998]: I0227 10:45:14.146800 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8w6x" event={"ID":"d00f7c61-1e4b-4d27-93d8-856fd11b09aa","Type":"ContainerStarted","Data":"138f51451e537a29bbbd1196c818624d860e65e63cbe0ed1518b3e4cc836f829"} Feb 27 10:45:15 crc kubenswrapper[4998]: I0227 10:45:15.156214 4998 generic.go:334] "Generic (PLEG): container finished" podID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerID="68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d" exitCode=0 Feb 27 10:45:15 crc kubenswrapper[4998]: I0227 10:45:15.156311 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8w6x" event={"ID":"d00f7c61-1e4b-4d27-93d8-856fd11b09aa","Type":"ContainerDied","Data":"68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d"} Feb 27 10:45:16 crc kubenswrapper[4998]: I0227 10:45:16.171328 4998 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-s8w6x" event={"ID":"d00f7c61-1e4b-4d27-93d8-856fd11b09aa","Type":"ContainerStarted","Data":"43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8"} Feb 27 10:45:16 crc kubenswrapper[4998]: I0227 10:45:16.200586 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s8w6x" podStartSLOduration=2.779649223 podStartE2EDuration="4.200561912s" podCreationTimestamp="2026-02-27 10:45:12 +0000 UTC" firstStartedPulling="2026-02-27 10:45:14.148939237 +0000 UTC m=+1666.147210205" lastFinishedPulling="2026-02-27 10:45:15.569851896 +0000 UTC m=+1667.568122894" observedRunningTime="2026-02-27 10:45:16.189301603 +0000 UTC m=+1668.187572611" watchObservedRunningTime="2026-02-27 10:45:16.200561912 +0000 UTC m=+1668.198832920" Feb 27 10:45:21 crc kubenswrapper[4998]: I0227 10:45:21.764712 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:45:21 crc kubenswrapper[4998]: E0227 10:45:21.765207 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:45:23 crc kubenswrapper[4998]: I0227 10:45:23.282222 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:23 crc kubenswrapper[4998]: I0227 10:45:23.283404 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:23 crc kubenswrapper[4998]: I0227 10:45:23.350739 4998 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:24 crc kubenswrapper[4998]: I0227 10:45:24.319691 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:24 crc kubenswrapper[4998]: I0227 10:45:24.371626 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s8w6x"] Feb 27 10:45:25 crc kubenswrapper[4998]: I0227 10:45:25.998030 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jncvm"] Feb 27 10:45:25 crc kubenswrapper[4998]: I0227 10:45:25.999822 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.022419 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jncvm"] Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.055966 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rfdb\" (UniqueName: \"kubernetes.io/projected/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-kube-api-access-8rfdb\") pod \"certified-operators-jncvm\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.056439 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-utilities\") pod \"certified-operators-jncvm\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.056548 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-catalog-content\") pod \"certified-operators-jncvm\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.158777 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-utilities\") pod \"certified-operators-jncvm\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.159163 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-catalog-content\") pod \"certified-operators-jncvm\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.159281 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfdb\" (UniqueName: \"kubernetes.io/projected/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-kube-api-access-8rfdb\") pod \"certified-operators-jncvm\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.159328 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-utilities\") pod \"certified-operators-jncvm\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.159560 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-catalog-content\") pod \"certified-operators-jncvm\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.187686 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfdb\" (UniqueName: \"kubernetes.io/projected/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-kube-api-access-8rfdb\") pod \"certified-operators-jncvm\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.260896 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s8w6x" podUID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerName="registry-server" containerID="cri-o://43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8" gracePeriod=2 Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.347533 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.764920 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.871820 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-catalog-content\") pod \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.872023 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb29x\" (UniqueName: \"kubernetes.io/projected/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-kube-api-access-mb29x\") pod \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.872064 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-utilities\") pod \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\" (UID: \"d00f7c61-1e4b-4d27-93d8-856fd11b09aa\") " Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.877541 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-utilities" (OuterVolumeSpecName: "utilities") pod "d00f7c61-1e4b-4d27-93d8-856fd11b09aa" (UID: "d00f7c61-1e4b-4d27-93d8-856fd11b09aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.880383 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-kube-api-access-mb29x" (OuterVolumeSpecName: "kube-api-access-mb29x") pod "d00f7c61-1e4b-4d27-93d8-856fd11b09aa" (UID: "d00f7c61-1e4b-4d27-93d8-856fd11b09aa"). InnerVolumeSpecName "kube-api-access-mb29x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.891888 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb29x\" (UniqueName: \"kubernetes.io/projected/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-kube-api-access-mb29x\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.891930 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.935888 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jncvm"] Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.939957 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d00f7c61-1e4b-4d27-93d8-856fd11b09aa" (UID: "d00f7c61-1e4b-4d27-93d8-856fd11b09aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:45:26 crc kubenswrapper[4998]: I0227 10:45:26.993329 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00f7c61-1e4b-4d27-93d8-856fd11b09aa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.278382 4998 generic.go:334] "Generic (PLEG): container finished" podID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerID="43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8" exitCode=0 Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.278431 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s8w6x" Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.278458 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8w6x" event={"ID":"d00f7c61-1e4b-4d27-93d8-856fd11b09aa","Type":"ContainerDied","Data":"43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8"} Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.278483 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8w6x" event={"ID":"d00f7c61-1e4b-4d27-93d8-856fd11b09aa","Type":"ContainerDied","Data":"138f51451e537a29bbbd1196c818624d860e65e63cbe0ed1518b3e4cc836f829"} Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.278499 4998 scope.go:117] "RemoveContainer" containerID="43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8" Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.282743 4998 generic.go:334] "Generic (PLEG): container finished" podID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerID="755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff" exitCode=0 Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.282770 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jncvm" event={"ID":"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b","Type":"ContainerDied","Data":"755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff"} Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.282792 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jncvm" event={"ID":"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b","Type":"ContainerStarted","Data":"ae578a22cdd112f78e12fe8f4213b1e3dbf34957922ed364798fedf2f1efc59c"} Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.330698 4998 scope.go:117] "RemoveContainer" containerID="68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d" Feb 27 10:45:27 crc 
kubenswrapper[4998]: I0227 10:45:27.337375 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s8w6x"] Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.346826 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s8w6x"] Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.352932 4998 scope.go:117] "RemoveContainer" containerID="dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f" Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.406852 4998 scope.go:117] "RemoveContainer" containerID="43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8" Feb 27 10:45:27 crc kubenswrapper[4998]: E0227 10:45:27.407265 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8\": container with ID starting with 43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8 not found: ID does not exist" containerID="43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8" Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.407295 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8"} err="failed to get container status \"43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8\": rpc error: code = NotFound desc = could not find container \"43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8\": container with ID starting with 43241ac6f0482b99278c4a77704416ee68043fd4c74d54f50d81c1119f0ec1e8 not found: ID does not exist" Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.407313 4998 scope.go:117] "RemoveContainer" containerID="68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d" Feb 27 10:45:27 crc kubenswrapper[4998]: E0227 10:45:27.407510 4998 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d\": container with ID starting with 68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d not found: ID does not exist" containerID="68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d" Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.407532 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d"} err="failed to get container status \"68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d\": rpc error: code = NotFound desc = could not find container \"68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d\": container with ID starting with 68a17fa0f4bbbdfcd232f195761369caa5006905d3034c717a3913fbd4876b4d not found: ID does not exist" Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.407545 4998 scope.go:117] "RemoveContainer" containerID="dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f" Feb 27 10:45:27 crc kubenswrapper[4998]: E0227 10:45:27.407709 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f\": container with ID starting with dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f not found: ID does not exist" containerID="dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f" Feb 27 10:45:27 crc kubenswrapper[4998]: I0227 10:45:27.407735 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f"} err="failed to get container status \"dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f\": rpc error: code = NotFound desc = could 
not find container \"dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f\": container with ID starting with dc39c85da8edb439f5c9b0905efa9dfea0ab2756589296376523f6023e62a60f not found: ID does not exist" Feb 27 10:45:28 crc kubenswrapper[4998]: I0227 10:45:28.780193 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" path="/var/lib/kubelet/pods/d00f7c61-1e4b-4d27-93d8-856fd11b09aa/volumes" Feb 27 10:45:29 crc kubenswrapper[4998]: I0227 10:45:29.308205 4998 generic.go:334] "Generic (PLEG): container finished" podID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerID="a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490" exitCode=0 Feb 27 10:45:29 crc kubenswrapper[4998]: I0227 10:45:29.308294 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jncvm" event={"ID":"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b","Type":"ContainerDied","Data":"a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490"} Feb 27 10:45:30 crc kubenswrapper[4998]: I0227 10:45:30.319473 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jncvm" event={"ID":"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b","Type":"ContainerStarted","Data":"ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f"} Feb 27 10:45:30 crc kubenswrapper[4998]: I0227 10:45:30.344825 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jncvm" podStartSLOduration=2.919283106 podStartE2EDuration="5.344810037s" podCreationTimestamp="2026-02-27 10:45:25 +0000 UTC" firstStartedPulling="2026-02-27 10:45:27.284076027 +0000 UTC m=+1679.282346995" lastFinishedPulling="2026-02-27 10:45:29.709602958 +0000 UTC m=+1681.707873926" observedRunningTime="2026-02-27 10:45:30.339480621 +0000 UTC m=+1682.337751629" watchObservedRunningTime="2026-02-27 10:45:30.344810037 +0000 UTC 
m=+1682.343080995" Feb 27 10:45:33 crc kubenswrapper[4998]: I0227 10:45:33.765421 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:45:33 crc kubenswrapper[4998]: E0227 10:45:33.766204 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:45:36 crc kubenswrapper[4998]: I0227 10:45:36.347982 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:36 crc kubenswrapper[4998]: I0227 10:45:36.348422 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:36 crc kubenswrapper[4998]: I0227 10:45:36.394837 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:36 crc kubenswrapper[4998]: I0227 10:45:36.447165 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:36 crc kubenswrapper[4998]: I0227 10:45:36.637206 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jncvm"] Feb 27 10:45:38 crc kubenswrapper[4998]: I0227 10:45:38.395810 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jncvm" podUID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerName="registry-server" containerID="cri-o://ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f" gracePeriod=2 Feb 
27 10:45:38 crc kubenswrapper[4998]: I0227 10:45:38.853993 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.034803 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-utilities\") pod \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.035024 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rfdb\" (UniqueName: \"kubernetes.io/projected/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-kube-api-access-8rfdb\") pod \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.035138 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-catalog-content\") pod \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\" (UID: \"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b\") " Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.036141 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-utilities" (OuterVolumeSpecName: "utilities") pod "53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" (UID: "53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.041019 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-kube-api-access-8rfdb" (OuterVolumeSpecName: "kube-api-access-8rfdb") pod "53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" (UID: "53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b"). InnerVolumeSpecName "kube-api-access-8rfdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.137703 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rfdb\" (UniqueName: \"kubernetes.io/projected/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-kube-api-access-8rfdb\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.137746 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.410703 4998 generic.go:334] "Generic (PLEG): container finished" podID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerID="ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f" exitCode=0 Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.410772 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jncvm" event={"ID":"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b","Type":"ContainerDied","Data":"ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f"} Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.410835 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jncvm" event={"ID":"53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b","Type":"ContainerDied","Data":"ae578a22cdd112f78e12fe8f4213b1e3dbf34957922ed364798fedf2f1efc59c"} Feb 27 10:45:39 crc kubenswrapper[4998]: 
I0227 10:45:39.410860 4998 scope.go:117] "RemoveContainer" containerID="ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.410798 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jncvm" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.437437 4998 scope.go:117] "RemoveContainer" containerID="a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.462544 4998 scope.go:117] "RemoveContainer" containerID="755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.550477 4998 scope.go:117] "RemoveContainer" containerID="ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f" Feb 27 10:45:39 crc kubenswrapper[4998]: E0227 10:45:39.550798 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f\": container with ID starting with ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f not found: ID does not exist" containerID="ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.550856 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f"} err="failed to get container status \"ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f\": rpc error: code = NotFound desc = could not find container \"ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f\": container with ID starting with ced866ab02df638704d2a4cfdba20be868bc678161a03e890333385e0db6219f not found: ID does not exist" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.550894 4998 
scope.go:117] "RemoveContainer" containerID="a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490" Feb 27 10:45:39 crc kubenswrapper[4998]: E0227 10:45:39.551249 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490\": container with ID starting with a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490 not found: ID does not exist" containerID="a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.551288 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490"} err="failed to get container status \"a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490\": rpc error: code = NotFound desc = could not find container \"a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490\": container with ID starting with a120daa87d61a9118bc9a7c33a64da672184d7d40cd615b4cb68cd768d8ab490 not found: ID does not exist" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.551314 4998 scope.go:117] "RemoveContainer" containerID="755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff" Feb 27 10:45:39 crc kubenswrapper[4998]: E0227 10:45:39.551816 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff\": container with ID starting with 755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff not found: ID does not exist" containerID="755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.551851 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff"} err="failed to get container status \"755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff\": rpc error: code = NotFound desc = could not find container \"755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff\": container with ID starting with 755596ddf7fcfd8abd699dbacd83ce5d9e7a74f358369276eed0dc4a1de9d6ff not found: ID does not exist" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.676835 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" (UID: "53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.747291 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.753162 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jncvm"] Feb 27 10:45:39 crc kubenswrapper[4998]: I0227 10:45:39.777037 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jncvm"] Feb 27 10:45:40 crc kubenswrapper[4998]: I0227 10:45:40.775112 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" path="/var/lib/kubelet/pods/53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b/volumes" Feb 27 10:45:45 crc kubenswrapper[4998]: I0227 10:45:45.764854 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:45:45 crc kubenswrapper[4998]: E0227 
10:45:45.765598 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:45:52 crc kubenswrapper[4998]: I0227 10:45:52.548160 4998 generic.go:334] "Generic (PLEG): container finished" podID="bf074ce9-dfbc-44fe-8839-67aa9d2d43f8" containerID="72b8e867a981a9a541a52ee66009ae8748a9592c48c1b6fd987ab30ca33b5d9e" exitCode=0 Feb 27 10:45:52 crc kubenswrapper[4998]: I0227 10:45:52.548274 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" event={"ID":"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8","Type":"ContainerDied","Data":"72b8e867a981a9a541a52ee66009ae8748a9592c48c1b6fd987ab30ca33b5d9e"} Feb 27 10:45:53 crc kubenswrapper[4998]: I0227 10:45:53.917385 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.015602 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-bootstrap-combined-ca-bundle\") pod \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.015762 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-inventory\") pod \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.015792 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwfzt\" (UniqueName: \"kubernetes.io/projected/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-kube-api-access-xwfzt\") pod \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.015842 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-ssh-key-openstack-edpm-ipam\") pod \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\" (UID: \"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8\") " Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.021420 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-kube-api-access-xwfzt" (OuterVolumeSpecName: "kube-api-access-xwfzt") pod "bf074ce9-dfbc-44fe-8839-67aa9d2d43f8" (UID: "bf074ce9-dfbc-44fe-8839-67aa9d2d43f8"). InnerVolumeSpecName "kube-api-access-xwfzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.021459 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bf074ce9-dfbc-44fe-8839-67aa9d2d43f8" (UID: "bf074ce9-dfbc-44fe-8839-67aa9d2d43f8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.045048 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf074ce9-dfbc-44fe-8839-67aa9d2d43f8" (UID: "bf074ce9-dfbc-44fe-8839-67aa9d2d43f8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.046168 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-inventory" (OuterVolumeSpecName: "inventory") pod "bf074ce9-dfbc-44fe-8839-67aa9d2d43f8" (UID: "bf074ce9-dfbc-44fe-8839-67aa9d2d43f8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.117791 4998 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.118075 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.118156 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwfzt\" (UniqueName: \"kubernetes.io/projected/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-kube-api-access-xwfzt\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.118488 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf074ce9-dfbc-44fe-8839-67aa9d2d43f8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.572761 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" event={"ID":"bf074ce9-dfbc-44fe-8839-67aa9d2d43f8","Type":"ContainerDied","Data":"32563f1752fafc6d9c8328aee022369ff29e623282f4610108c7a9a15f582c10"} Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.572797 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32563f1752fafc6d9c8328aee022369ff29e623282f4610108c7a9a15f582c10" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.572851 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.666956 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7"] Feb 27 10:45:54 crc kubenswrapper[4998]: E0227 10:45:54.667683 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerName="registry-server" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.667779 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerName="registry-server" Feb 27 10:45:54 crc kubenswrapper[4998]: E0227 10:45:54.667882 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerName="extract-utilities" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.667984 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerName="extract-utilities" Feb 27 10:45:54 crc kubenswrapper[4998]: E0227 10:45:54.668072 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf074ce9-dfbc-44fe-8839-67aa9d2d43f8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.668150 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf074ce9-dfbc-44fe-8839-67aa9d2d43f8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 27 10:45:54 crc kubenswrapper[4998]: E0227 10:45:54.668288 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerName="registry-server" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.668371 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerName="registry-server" Feb 27 10:45:54 crc kubenswrapper[4998]: E0227 10:45:54.668465 
4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerName="extract-content" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.668534 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerName="extract-content" Feb 27 10:45:54 crc kubenswrapper[4998]: E0227 10:45:54.668599 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerName="extract-content" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.668680 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerName="extract-content" Feb 27 10:45:54 crc kubenswrapper[4998]: E0227 10:45:54.668778 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerName="extract-utilities" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.668845 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerName="extract-utilities" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.669120 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b39c75-f88d-4fa8-9ab9-3a1dd81bd83b" containerName="registry-server" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.669209 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00f7c61-1e4b-4d27-93d8-856fd11b09aa" containerName="registry-server" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.669320 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf074ce9-dfbc-44fe-8839-67aa9d2d43f8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.670213 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.672174 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.672666 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.672890 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.673028 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.676857 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7"] Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.833394 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t98b7\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.833495 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6z8r\" (UniqueName: \"kubernetes.io/projected/7fa18717-d186-4f9e-8a17-f3689b187491-kube-api-access-s6z8r\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t98b7\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc 
kubenswrapper[4998]: I0227 10:45:54.833523 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t98b7\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.935257 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6z8r\" (UniqueName: \"kubernetes.io/projected/7fa18717-d186-4f9e-8a17-f3689b187491-kube-api-access-s6z8r\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t98b7\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.935322 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t98b7\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.935429 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t98b7\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.941127 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t98b7\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.943964 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t98b7\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.954119 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6z8r\" (UniqueName: \"kubernetes.io/projected/7fa18717-d186-4f9e-8a17-f3689b187491-kube-api-access-s6z8r\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t98b7\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:54 crc kubenswrapper[4998]: I0227 10:45:54.995736 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:45:55 crc kubenswrapper[4998]: I0227 10:45:55.539037 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7"] Feb 27 10:45:55 crc kubenswrapper[4998]: I0227 10:45:55.590273 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" event={"ID":"7fa18717-d186-4f9e-8a17-f3689b187491","Type":"ContainerStarted","Data":"2f3b6821bb5168544d05a89ac1a59ffcd7f9e658b3f187db4e29f23c4045670f"} Feb 27 10:45:56 crc kubenswrapper[4998]: I0227 10:45:56.765314 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:45:56 crc kubenswrapper[4998]: E0227 10:45:56.765801 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:45:57 crc kubenswrapper[4998]: I0227 10:45:57.614312 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" event={"ID":"7fa18717-d186-4f9e-8a17-f3689b187491","Type":"ContainerStarted","Data":"88596411b7a726e737b83173f29237f22bd79425357e9410ee90873c0200f2af"} Feb 27 10:45:57 crc kubenswrapper[4998]: I0227 10:45:57.639618 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" podStartSLOduration=2.594028928 podStartE2EDuration="3.639591738s" podCreationTimestamp="2026-02-27 10:45:54 +0000 UTC" 
firstStartedPulling="2026-02-27 10:45:55.544524682 +0000 UTC m=+1707.542795660" lastFinishedPulling="2026-02-27 10:45:56.590087502 +0000 UTC m=+1708.588358470" observedRunningTime="2026-02-27 10:45:57.630369948 +0000 UTC m=+1709.628640916" watchObservedRunningTime="2026-02-27 10:45:57.639591738 +0000 UTC m=+1709.637862736" Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.158336 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536486-vmqwl"] Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.161068 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536486-vmqwl" Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.165696 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.165982 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.165811 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.175293 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536486-vmqwl"] Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.345035 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjdh\" (UniqueName: \"kubernetes.io/projected/dce9adf8-87bf-48d3-a7bb-3e0596585bf4-kube-api-access-cbjdh\") pod \"auto-csr-approver-29536486-vmqwl\" (UID: \"dce9adf8-87bf-48d3-a7bb-3e0596585bf4\") " pod="openshift-infra/auto-csr-approver-29536486-vmqwl" Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.447598 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjdh\" 
(UniqueName: \"kubernetes.io/projected/dce9adf8-87bf-48d3-a7bb-3e0596585bf4-kube-api-access-cbjdh\") pod \"auto-csr-approver-29536486-vmqwl\" (UID: \"dce9adf8-87bf-48d3-a7bb-3e0596585bf4\") " pod="openshift-infra/auto-csr-approver-29536486-vmqwl" Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.475220 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjdh\" (UniqueName: \"kubernetes.io/projected/dce9adf8-87bf-48d3-a7bb-3e0596585bf4-kube-api-access-cbjdh\") pod \"auto-csr-approver-29536486-vmqwl\" (UID: \"dce9adf8-87bf-48d3-a7bb-3e0596585bf4\") " pod="openshift-infra/auto-csr-approver-29536486-vmqwl" Feb 27 10:46:00 crc kubenswrapper[4998]: I0227 10:46:00.496293 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536486-vmqwl" Feb 27 10:46:01 crc kubenswrapper[4998]: W0227 10:46:01.003926 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce9adf8_87bf_48d3_a7bb_3e0596585bf4.slice/crio-89f1f16b26a4d8f432ba6c4fd821036484f47ad4c9c24d26ed595242c8c0a0aa WatchSource:0}: Error finding container 89f1f16b26a4d8f432ba6c4fd821036484f47ad4c9c24d26ed595242c8c0a0aa: Status 404 returned error can't find the container with id 89f1f16b26a4d8f432ba6c4fd821036484f47ad4c9c24d26ed595242c8c0a0aa Feb 27 10:46:01 crc kubenswrapper[4998]: I0227 10:46:01.010044 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536486-vmqwl"] Feb 27 10:46:01 crc kubenswrapper[4998]: I0227 10:46:01.660399 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536486-vmqwl" event={"ID":"dce9adf8-87bf-48d3-a7bb-3e0596585bf4","Type":"ContainerStarted","Data":"89f1f16b26a4d8f432ba6c4fd821036484f47ad4c9c24d26ed595242c8c0a0aa"} Feb 27 10:46:02 crc kubenswrapper[4998]: I0227 10:46:02.669363 4998 generic.go:334] "Generic (PLEG): 
container finished" podID="dce9adf8-87bf-48d3-a7bb-3e0596585bf4" containerID="e5e86a51155f19c0e93dfb872c142f8706afdd37d3b4c58cd7ee97a7a1f97537" exitCode=0 Feb 27 10:46:02 crc kubenswrapper[4998]: I0227 10:46:02.669421 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536486-vmqwl" event={"ID":"dce9adf8-87bf-48d3-a7bb-3e0596585bf4","Type":"ContainerDied","Data":"e5e86a51155f19c0e93dfb872c142f8706afdd37d3b4c58cd7ee97a7a1f97537"} Feb 27 10:46:04 crc kubenswrapper[4998]: I0227 10:46:04.028184 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536486-vmqwl" Feb 27 10:46:04 crc kubenswrapper[4998]: I0227 10:46:04.119788 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbjdh\" (UniqueName: \"kubernetes.io/projected/dce9adf8-87bf-48d3-a7bb-3e0596585bf4-kube-api-access-cbjdh\") pod \"dce9adf8-87bf-48d3-a7bb-3e0596585bf4\" (UID: \"dce9adf8-87bf-48d3-a7bb-3e0596585bf4\") " Feb 27 10:46:04 crc kubenswrapper[4998]: I0227 10:46:04.128959 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce9adf8-87bf-48d3-a7bb-3e0596585bf4-kube-api-access-cbjdh" (OuterVolumeSpecName: "kube-api-access-cbjdh") pod "dce9adf8-87bf-48d3-a7bb-3e0596585bf4" (UID: "dce9adf8-87bf-48d3-a7bb-3e0596585bf4"). InnerVolumeSpecName "kube-api-access-cbjdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:46:04 crc kubenswrapper[4998]: I0227 10:46:04.222575 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbjdh\" (UniqueName: \"kubernetes.io/projected/dce9adf8-87bf-48d3-a7bb-3e0596585bf4-kube-api-access-cbjdh\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:04 crc kubenswrapper[4998]: I0227 10:46:04.690052 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536486-vmqwl" event={"ID":"dce9adf8-87bf-48d3-a7bb-3e0596585bf4","Type":"ContainerDied","Data":"89f1f16b26a4d8f432ba6c4fd821036484f47ad4c9c24d26ed595242c8c0a0aa"} Feb 27 10:46:04 crc kubenswrapper[4998]: I0227 10:46:04.690335 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f1f16b26a4d8f432ba6c4fd821036484f47ad4c9c24d26ed595242c8c0a0aa" Feb 27 10:46:04 crc kubenswrapper[4998]: I0227 10:46:04.690112 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536486-vmqwl" Feb 27 10:46:05 crc kubenswrapper[4998]: I0227 10:46:05.095108 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536480-4x855"] Feb 27 10:46:05 crc kubenswrapper[4998]: I0227 10:46:05.105171 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536480-4x855"] Feb 27 10:46:06 crc kubenswrapper[4998]: I0227 10:46:06.776719 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f855440-d78c-4b8b-b258-b4455899fba0" path="/var/lib/kubelet/pods/5f855440-d78c-4b8b-b258-b4455899fba0/volumes" Feb 27 10:46:07 crc kubenswrapper[4998]: I0227 10:46:07.765270 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:46:07 crc kubenswrapper[4998]: E0227 10:46:07.765955 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:46:21 crc kubenswrapper[4998]: I0227 10:46:21.765136 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:46:21 crc kubenswrapper[4998]: E0227 10:46:21.766355 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:46:34 crc kubenswrapper[4998]: I0227 10:46:34.765497 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:46:34 crc kubenswrapper[4998]: E0227 10:46:34.766543 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:46:45 crc kubenswrapper[4998]: I0227 10:46:45.765089 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:46:45 crc kubenswrapper[4998]: E0227 10:46:45.766005 4998 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:46:46 crc kubenswrapper[4998]: I0227 10:46:46.606537 4998 scope.go:117] "RemoveContainer" containerID="93fbe3f8ae0890e4c907024bc758f53363a96cece0ec44c2f50637b785c8c03f" Feb 27 10:46:46 crc kubenswrapper[4998]: I0227 10:46:46.628486 4998 scope.go:117] "RemoveContainer" containerID="1e63bf8db9b3b73321716d312e658449f9b74fcad62f2ec8727d5da31813ded3" Feb 27 10:46:46 crc kubenswrapper[4998]: I0227 10:46:46.671736 4998 scope.go:117] "RemoveContainer" containerID="fa395cc74c71064c42c5b81f816a3d385997c09302897dacf7aa1fc439125fad" Feb 27 10:46:46 crc kubenswrapper[4998]: I0227 10:46:46.712425 4998 scope.go:117] "RemoveContainer" containerID="a6dd5e47c6abb35450458dde4ec8aeba8b15341e384fc2b0033b757d85ca5355" Feb 27 10:46:56 crc kubenswrapper[4998]: I0227 10:46:56.766852 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:46:56 crc kubenswrapper[4998]: E0227 10:46:56.769777 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:47:11 crc kubenswrapper[4998]: I0227 10:47:11.765439 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:47:11 crc kubenswrapper[4998]: E0227 
10:47:11.766259 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:47:12 crc kubenswrapper[4998]: I0227 10:47:12.057369 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e6b9-account-create-update-zb9jg"] Feb 27 10:47:12 crc kubenswrapper[4998]: I0227 10:47:12.082100 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mfjlb"] Feb 27 10:47:12 crc kubenswrapper[4998]: I0227 10:47:12.091664 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e6b9-account-create-update-zb9jg"] Feb 27 10:47:12 crc kubenswrapper[4998]: I0227 10:47:12.101807 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mfjlb"] Feb 27 10:47:12 crc kubenswrapper[4998]: I0227 10:47:12.786426 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472a79a1-5809-4914-b8a3-1aa3a708bb9a" path="/var/lib/kubelet/pods/472a79a1-5809-4914-b8a3-1aa3a708bb9a/volumes" Feb 27 10:47:12 crc kubenswrapper[4998]: I0227 10:47:12.787748 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5297f13-d069-44e9-aa42-17bf298602e4" path="/var/lib/kubelet/pods/c5297f13-d069-44e9-aa42-17bf298602e4/volumes" Feb 27 10:47:13 crc kubenswrapper[4998]: I0227 10:47:13.044698 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-q5h9v"] Feb 27 10:47:13 crc kubenswrapper[4998]: I0227 10:47:13.053945 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f443-account-create-update-2mv4s"] Feb 27 10:47:13 crc kubenswrapper[4998]: 
I0227 10:47:13.066709 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-q5h9v"] Feb 27 10:47:13 crc kubenswrapper[4998]: I0227 10:47:13.077961 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f443-account-create-update-2mv4s"] Feb 27 10:47:14 crc kubenswrapper[4998]: I0227 10:47:14.778059 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e2f326-c479-4e94-a24f-42ec17281073" path="/var/lib/kubelet/pods/91e2f326-c479-4e94-a24f-42ec17281073/volumes" Feb 27 10:47:14 crc kubenswrapper[4998]: I0227 10:47:14.779122 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d29ee408-e38e-4bd8-b05c-9fe12d166c9e" path="/var/lib/kubelet/pods/d29ee408-e38e-4bd8-b05c-9fe12d166c9e/volumes" Feb 27 10:47:20 crc kubenswrapper[4998]: I0227 10:47:20.030645 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2l67v"] Feb 27 10:47:20 crc kubenswrapper[4998]: I0227 10:47:20.039528 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c2c7-account-create-update-fclht"] Feb 27 10:47:20 crc kubenswrapper[4998]: I0227 10:47:20.051049 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c2c7-account-create-update-fclht"] Feb 27 10:47:20 crc kubenswrapper[4998]: I0227 10:47:20.059141 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2l67v"] Feb 27 10:47:20 crc kubenswrapper[4998]: I0227 10:47:20.780756 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5" path="/var/lib/kubelet/pods/2173d9f2-1855-4ce8-bcfe-5dd0a8d99da5/volumes" Feb 27 10:47:20 crc kubenswrapper[4998]: I0227 10:47:20.781956 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef6de27-5d07-4c05-9da0-513855fbefa6" path="/var/lib/kubelet/pods/4ef6de27-5d07-4c05-9da0-513855fbefa6/volumes" Feb 27 10:47:24 crc 
kubenswrapper[4998]: I0227 10:47:24.765136 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:47:24 crc kubenswrapper[4998]: E0227 10:47:24.766035 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:47:36 crc kubenswrapper[4998]: I0227 10:47:36.764801 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:47:36 crc kubenswrapper[4998]: E0227 10:47:36.765646 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:47:43 crc kubenswrapper[4998]: I0227 10:47:43.050147 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q5krj"] Feb 27 10:47:43 crc kubenswrapper[4998]: I0227 10:47:43.058965 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q5krj"] Feb 27 10:47:44 crc kubenswrapper[4998]: I0227 10:47:44.778437 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2d1d1d-3eba-45d7-935c-6c71925d009a" path="/var/lib/kubelet/pods/6c2d1d1d-3eba-45d7-935c-6c71925d009a/volumes" Feb 27 10:47:46 crc kubenswrapper[4998]: I0227 10:47:46.812940 4998 scope.go:117] 
"RemoveContainer" containerID="4b44d92ff2d42b91e0734dbb0372fcb64f363d789014828f845c926c059df932" Feb 27 10:47:46 crc kubenswrapper[4998]: I0227 10:47:46.859955 4998 scope.go:117] "RemoveContainer" containerID="2ad90dc2b9d9de393c6d67401b20841b7cc89ec38646e127c07ddd254b80f184" Feb 27 10:47:46 crc kubenswrapper[4998]: I0227 10:47:46.910481 4998 scope.go:117] "RemoveContainer" containerID="fa0c8c512b2c2d546aec604bc47d0a75a02fa597b9683950982d2e8a1ac3b7e4" Feb 27 10:47:46 crc kubenswrapper[4998]: I0227 10:47:46.952741 4998 scope.go:117] "RemoveContainer" containerID="3a8af43fb129cce6cf03894f349e6724ec05c00d5edef40b2543249728a3fae3" Feb 27 10:47:46 crc kubenswrapper[4998]: I0227 10:47:46.996422 4998 scope.go:117] "RemoveContainer" containerID="d4a209e1100cb7caa3a0e4db93d8d75115afcef62988d8eb6a691aa6bdd5023f" Feb 27 10:47:47 crc kubenswrapper[4998]: I0227 10:47:47.046436 4998 scope.go:117] "RemoveContainer" containerID="5a50d303aa58005bb4829d13042d3c899d47e3382e098a688f8bd18e6f33a20f" Feb 27 10:47:47 crc kubenswrapper[4998]: I0227 10:47:47.092657 4998 scope.go:117] "RemoveContainer" containerID="61af5b34fc0062ba56c71848973511ea8009a7154cd302708c805aa4a500d9b1" Feb 27 10:47:50 crc kubenswrapper[4998]: I0227 10:47:50.765603 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:47:50 crc kubenswrapper[4998]: E0227 10:47:50.766528 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.039156 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-create-68rxz"] Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.049634 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tdqvg"] Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.063197 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f503-account-create-update-sppwc"] Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.074709 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-j226t"] Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.083463 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-70b3-account-create-update-jxckj"] Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.100528 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tdqvg"] Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.109638 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f503-account-create-update-sppwc"] Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.117303 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-68rxz"] Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.124586 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-70b3-account-create-update-jxckj"] Feb 27 10:47:51 crc kubenswrapper[4998]: I0227 10:47:51.131480 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-j226t"] Feb 27 10:47:52 crc kubenswrapper[4998]: I0227 10:47:52.785449 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b458da-5079-4368-935a-74562555231c" path="/var/lib/kubelet/pods/33b458da-5079-4368-935a-74562555231c/volumes" Feb 27 10:47:52 crc kubenswrapper[4998]: I0227 10:47:52.786556 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a51c99e4-d488-4245-8642-7e02c861919c" 
path="/var/lib/kubelet/pods/a51c99e4-d488-4245-8642-7e02c861919c/volumes" Feb 27 10:47:52 crc kubenswrapper[4998]: I0227 10:47:52.787207 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a99365b3-16bc-4dce-9952-9f5cc37dfe2b" path="/var/lib/kubelet/pods/a99365b3-16bc-4dce-9952-9f5cc37dfe2b/volumes" Feb 27 10:47:52 crc kubenswrapper[4998]: I0227 10:47:52.787912 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c53e7f43-9c1d-487b-984a-f6ea82b5caec" path="/var/lib/kubelet/pods/c53e7f43-9c1d-487b-984a-f6ea82b5caec/volumes" Feb 27 10:47:52 crc kubenswrapper[4998]: I0227 10:47:52.789104 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c813f4-c426-471d-a640-9889450bfec7" path="/var/lib/kubelet/pods/f0c813f4-c426-471d-a640-9889450bfec7/volumes" Feb 27 10:47:54 crc kubenswrapper[4998]: I0227 10:47:54.044377 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kdmkv"] Feb 27 10:47:54 crc kubenswrapper[4998]: I0227 10:47:54.057563 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3ce1-account-create-update-25bnc"] Feb 27 10:47:54 crc kubenswrapper[4998]: I0227 10:47:54.065046 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kdmkv"] Feb 27 10:47:54 crc kubenswrapper[4998]: I0227 10:47:54.072182 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3ce1-account-create-update-25bnc"] Feb 27 10:47:54 crc kubenswrapper[4998]: I0227 10:47:54.780573 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b35bf46b-b800-4e48-a90f-1a5e25eb3e3b" path="/var/lib/kubelet/pods/b35bf46b-b800-4e48-a90f-1a5e25eb3e3b/volumes" Feb 27 10:47:54 crc kubenswrapper[4998]: I0227 10:47:54.782070 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c7cf30-091f-4dea-bbc1-156ad96a5451" path="/var/lib/kubelet/pods/b4c7cf30-091f-4dea-bbc1-156ad96a5451/volumes" Feb 27 
10:47:58 crc kubenswrapper[4998]: I0227 10:47:58.898018 4998 generic.go:334] "Generic (PLEG): container finished" podID="7fa18717-d186-4f9e-8a17-f3689b187491" containerID="88596411b7a726e737b83173f29237f22bd79425357e9410ee90873c0200f2af" exitCode=0 Feb 27 10:47:58 crc kubenswrapper[4998]: I0227 10:47:58.898110 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" event={"ID":"7fa18717-d186-4f9e-8a17-f3689b187491","Type":"ContainerDied","Data":"88596411b7a726e737b83173f29237f22bd79425357e9410ee90873c0200f2af"} Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.037148 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8vc7w"] Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.048765 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8vc7w"] Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.164447 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536488-7f9vc"] Feb 27 10:48:00 crc kubenswrapper[4998]: E0227 10:48:00.165121 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce9adf8-87bf-48d3-a7bb-3e0596585bf4" containerName="oc" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.165144 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce9adf8-87bf-48d3-a7bb-3e0596585bf4" containerName="oc" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.165421 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce9adf8-87bf-48d3-a7bb-3e0596585bf4" containerName="oc" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.166219 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536488-7f9vc" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.172606 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.173046 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.173868 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.185657 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536488-7f9vc"] Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.329706 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf5b4\" (UniqueName: \"kubernetes.io/projected/779a501d-c19a-4ee2-94ed-9449448195b2-kube-api-access-cf5b4\") pod \"auto-csr-approver-29536488-7f9vc\" (UID: \"779a501d-c19a-4ee2-94ed-9449448195b2\") " pod="openshift-infra/auto-csr-approver-29536488-7f9vc" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.390423 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.432346 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf5b4\" (UniqueName: \"kubernetes.io/projected/779a501d-c19a-4ee2-94ed-9449448195b2-kube-api-access-cf5b4\") pod \"auto-csr-approver-29536488-7f9vc\" (UID: \"779a501d-c19a-4ee2-94ed-9449448195b2\") " pod="openshift-infra/auto-csr-approver-29536488-7f9vc" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.451368 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf5b4\" (UniqueName: \"kubernetes.io/projected/779a501d-c19a-4ee2-94ed-9449448195b2-kube-api-access-cf5b4\") pod \"auto-csr-approver-29536488-7f9vc\" (UID: \"779a501d-c19a-4ee2-94ed-9449448195b2\") " pod="openshift-infra/auto-csr-approver-29536488-7f9vc" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.492462 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536488-7f9vc" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.534812 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-ssh-key-openstack-edpm-ipam\") pod \"7fa18717-d186-4f9e-8a17-f3689b187491\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.534957 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-inventory\") pod \"7fa18717-d186-4f9e-8a17-f3689b187491\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.535119 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6z8r\" (UniqueName: \"kubernetes.io/projected/7fa18717-d186-4f9e-8a17-f3689b187491-kube-api-access-s6z8r\") pod \"7fa18717-d186-4f9e-8a17-f3689b187491\" (UID: \"7fa18717-d186-4f9e-8a17-f3689b187491\") " Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.540196 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa18717-d186-4f9e-8a17-f3689b187491-kube-api-access-s6z8r" (OuterVolumeSpecName: "kube-api-access-s6z8r") pod "7fa18717-d186-4f9e-8a17-f3689b187491" (UID: "7fa18717-d186-4f9e-8a17-f3689b187491"). InnerVolumeSpecName "kube-api-access-s6z8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.562739 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-inventory" (OuterVolumeSpecName: "inventory") pod "7fa18717-d186-4f9e-8a17-f3689b187491" (UID: "7fa18717-d186-4f9e-8a17-f3689b187491"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.567058 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7fa18717-d186-4f9e-8a17-f3689b187491" (UID: "7fa18717-d186-4f9e-8a17-f3689b187491"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.637755 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6z8r\" (UniqueName: \"kubernetes.io/projected/7fa18717-d186-4f9e-8a17-f3689b187491-kube-api-access-s6z8r\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.637782 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.637792 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fa18717-d186-4f9e-8a17-f3689b187491-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.789313 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a57573-5e1f-4004-bb42-4de9e20de0ef" path="/var/lib/kubelet/pods/87a57573-5e1f-4004-bb42-4de9e20de0ef/volumes" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.915708 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536488-7f9vc"] Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.921100 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" 
event={"ID":"7fa18717-d186-4f9e-8a17-f3689b187491","Type":"ContainerDied","Data":"2f3b6821bb5168544d05a89ac1a59ffcd7f9e658b3f187db4e29f23c4045670f"} Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.921140 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f3b6821bb5168544d05a89ac1a59ffcd7f9e658b3f187db4e29f23c4045670f" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.921146 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t98b7" Feb 27 10:48:00 crc kubenswrapper[4998]: I0227 10:48:00.999658 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m"] Feb 27 10:48:01 crc kubenswrapper[4998]: E0227 10:48:01.000325 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa18717-d186-4f9e-8a17-f3689b187491" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.000349 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa18717-d186-4f9e-8a17-f3689b187491" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.000605 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa18717-d186-4f9e-8a17-f3689b187491" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.001486 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.010529 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m"] Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.015904 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.016036 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.016034 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.016207 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.147329 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ph82m\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.147405 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnghl\" (UniqueName: \"kubernetes.io/projected/23592e13-bc1d-4017-918b-1b78a059e903-kube-api-access-wnghl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ph82m\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 
10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.147440 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ph82m\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.249589 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ph82m\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.249684 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnghl\" (UniqueName: \"kubernetes.io/projected/23592e13-bc1d-4017-918b-1b78a059e903-kube-api-access-wnghl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ph82m\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.249724 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ph82m\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.254707 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ph82m\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.255268 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ph82m\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.271928 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnghl\" (UniqueName: \"kubernetes.io/projected/23592e13-bc1d-4017-918b-1b78a059e903-kube-api-access-wnghl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ph82m\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.324113 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.863766 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m"] Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.932845 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536488-7f9vc" event={"ID":"779a501d-c19a-4ee2-94ed-9449448195b2","Type":"ContainerStarted","Data":"5eb6f2587ad5e9db09b09646501cca05e8f693345415a25e3adef3fe732cd82e"} Feb 27 10:48:01 crc kubenswrapper[4998]: I0227 10:48:01.934898 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" event={"ID":"23592e13-bc1d-4017-918b-1b78a059e903","Type":"ContainerStarted","Data":"02f7d98af406ec43009480592417ab556108cf3ccde10d0b2776e9f5ff72d1c6"} Feb 27 10:48:02 crc kubenswrapper[4998]: I0227 10:48:02.944932 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" event={"ID":"23592e13-bc1d-4017-918b-1b78a059e903","Type":"ContainerStarted","Data":"b989604b2dfe1f5e695f352ad712c63fcd1750220a0b3af512e0e3002977f13c"} Feb 27 10:48:02 crc kubenswrapper[4998]: I0227 10:48:02.947335 4998 generic.go:334] "Generic (PLEG): container finished" podID="779a501d-c19a-4ee2-94ed-9449448195b2" containerID="9efbca7d52b4645b3c56013046c46d39d981cadb9849f0217787eafd0027a9af" exitCode=0 Feb 27 10:48:02 crc kubenswrapper[4998]: I0227 10:48:02.947415 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536488-7f9vc" event={"ID":"779a501d-c19a-4ee2-94ed-9449448195b2","Type":"ContainerDied","Data":"9efbca7d52b4645b3c56013046c46d39d981cadb9849f0217787eafd0027a9af"} Feb 27 10:48:02 crc kubenswrapper[4998]: I0227 10:48:02.971341 4998 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" podStartSLOduration=2.5825863399999998 podStartE2EDuration="2.97130056s" podCreationTimestamp="2026-02-27 10:48:00 +0000 UTC" firstStartedPulling="2026-02-27 10:48:01.874963204 +0000 UTC m=+1833.873234182" lastFinishedPulling="2026-02-27 10:48:02.263677434 +0000 UTC m=+1834.261948402" observedRunningTime="2026-02-27 10:48:02.959310199 +0000 UTC m=+1834.957581187" watchObservedRunningTime="2026-02-27 10:48:02.97130056 +0000 UTC m=+1834.969571518" Feb 27 10:48:04 crc kubenswrapper[4998]: I0227 10:48:04.410752 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536488-7f9vc" Feb 27 10:48:04 crc kubenswrapper[4998]: I0227 10:48:04.521016 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf5b4\" (UniqueName: \"kubernetes.io/projected/779a501d-c19a-4ee2-94ed-9449448195b2-kube-api-access-cf5b4\") pod \"779a501d-c19a-4ee2-94ed-9449448195b2\" (UID: \"779a501d-c19a-4ee2-94ed-9449448195b2\") " Feb 27 10:48:04 crc kubenswrapper[4998]: I0227 10:48:04.528312 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/779a501d-c19a-4ee2-94ed-9449448195b2-kube-api-access-cf5b4" (OuterVolumeSpecName: "kube-api-access-cf5b4") pod "779a501d-c19a-4ee2-94ed-9449448195b2" (UID: "779a501d-c19a-4ee2-94ed-9449448195b2"). InnerVolumeSpecName "kube-api-access-cf5b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:04 crc kubenswrapper[4998]: I0227 10:48:04.622984 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf5b4\" (UniqueName: \"kubernetes.io/projected/779a501d-c19a-4ee2-94ed-9449448195b2-kube-api-access-cf5b4\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:04 crc kubenswrapper[4998]: I0227 10:48:04.765979 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:48:04 crc kubenswrapper[4998]: E0227 10:48:04.766308 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:48:04 crc kubenswrapper[4998]: I0227 10:48:04.969614 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536488-7f9vc" event={"ID":"779a501d-c19a-4ee2-94ed-9449448195b2","Type":"ContainerDied","Data":"5eb6f2587ad5e9db09b09646501cca05e8f693345415a25e3adef3fe732cd82e"} Feb 27 10:48:04 crc kubenswrapper[4998]: I0227 10:48:04.969672 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb6f2587ad5e9db09b09646501cca05e8f693345415a25e3adef3fe732cd82e" Feb 27 10:48:04 crc kubenswrapper[4998]: I0227 10:48:04.969739 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536488-7f9vc" Feb 27 10:48:05 crc kubenswrapper[4998]: I0227 10:48:05.460562 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536482-zs5m9"] Feb 27 10:48:05 crc kubenswrapper[4998]: I0227 10:48:05.471103 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536482-zs5m9"] Feb 27 10:48:06 crc kubenswrapper[4998]: I0227 10:48:06.781566 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="590d897d-12b6-491a-bf20-33c238b10871" path="/var/lib/kubelet/pods/590d897d-12b6-491a-bf20-33c238b10871/volumes" Feb 27 10:48:19 crc kubenswrapper[4998]: I0227 10:48:19.765459 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:48:19 crc kubenswrapper[4998]: E0227 10:48:19.766307 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:48:31 crc kubenswrapper[4998]: I0227 10:48:31.045829 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-b8h8c"] Feb 27 10:48:31 crc kubenswrapper[4998]: I0227 10:48:31.058590 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-b8h8c"] Feb 27 10:48:31 crc kubenswrapper[4998]: I0227 10:48:31.765699 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:48:31 crc kubenswrapper[4998]: E0227 10:48:31.766064 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:48:32 crc kubenswrapper[4998]: I0227 10:48:32.787155 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97372f36-bf18-4a79-917b-cf9b6d0f92a2" path="/var/lib/kubelet/pods/97372f36-bf18-4a79-917b-cf9b6d0f92a2/volumes" Feb 27 10:48:38 crc kubenswrapper[4998]: I0227 10:48:38.040440 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-plff2"] Feb 27 10:48:38 crc kubenswrapper[4998]: I0227 10:48:38.059760 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-plff2"] Feb 27 10:48:38 crc kubenswrapper[4998]: I0227 10:48:38.775744 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55e4748-da26-4ed7-8bba-e7260a78ba19" path="/var/lib/kubelet/pods/e55e4748-da26-4ed7-8bba-e7260a78ba19/volumes" Feb 27 10:48:44 crc kubenswrapper[4998]: I0227 10:48:44.767311 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:48:44 crc kubenswrapper[4998]: E0227 10:48:44.768122 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:48:46 crc kubenswrapper[4998]: I0227 10:48:46.027873 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dlccw"] Feb 27 
10:48:46 crc kubenswrapper[4998]: I0227 10:48:46.035428 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dlccw"] Feb 27 10:48:46 crc kubenswrapper[4998]: I0227 10:48:46.774948 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06aa0c33-2be2-426a-98a0-eff676933eb1" path="/var/lib/kubelet/pods/06aa0c33-2be2-426a-98a0-eff676933eb1/volumes" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.242773 4998 scope.go:117] "RemoveContainer" containerID="89d5d1c9df571b02645d0ebdb4dd67443dbab4a40e735f37698dd8bae06ee4d3" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.288758 4998 scope.go:117] "RemoveContainer" containerID="eae523fac7505ae968dd9b7f9bb1c82265f152db269da617070438ba4412f92a" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.342303 4998 scope.go:117] "RemoveContainer" containerID="e3ed6fa400f5b4e9c3001899d26ba4be388dd3a9b92720a015b3ee8d86e584be" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.411505 4998 scope.go:117] "RemoveContainer" containerID="c70d09dd99c13f00697825f7e4e0c0041a19779e0ef465ce606cbe63ae93670f" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.437159 4998 scope.go:117] "RemoveContainer" containerID="27c60324302e68015b924ed479bfbe21ddc9277870418632bf5a046d96883cbb" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.486691 4998 scope.go:117] "RemoveContainer" containerID="90031f06f62cdca0d25a84c9f00af5f52b5558ebac987e5e53ec011eba89cd2f" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.533944 4998 scope.go:117] "RemoveContainer" containerID="3ad8ead1d374638d70d30ef0515106557398f5e552665a5da2284551e7f7e532" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.553812 4998 scope.go:117] "RemoveContainer" containerID="e76b698c2159f521a393996218c6ebaeec4738abd51d34008556d34317066f14" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.570614 4998 scope.go:117] "RemoveContainer" 
containerID="d1b0cc283350dc86c38043c0b922f6d720ef3669dd2604dce714abe1361d664f" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.599684 4998 scope.go:117] "RemoveContainer" containerID="de81a5f70de9a03abbd0d4f927a765587abce470f9ae52dc8086c284b44f0c93" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.638954 4998 scope.go:117] "RemoveContainer" containerID="df502cfcbbdee7b48d76fc4a8b1f4caf4a3ae3fbdc80f4ec4db85fab974eb7d0" Feb 27 10:48:47 crc kubenswrapper[4998]: I0227 10:48:47.661749 4998 scope.go:117] "RemoveContainer" containerID="da647573782f2ae67fe422e061eca74eab0037368f1c39126eea24206d3ad9e9" Feb 27 10:48:55 crc kubenswrapper[4998]: I0227 10:48:55.057301 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-5x42c"] Feb 27 10:48:55 crc kubenswrapper[4998]: I0227 10:48:55.066316 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-5x42c"] Feb 27 10:48:56 crc kubenswrapper[4998]: I0227 10:48:56.774961 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d455fe16-80bf-42c1-be16-a87102249bf8" path="/var/lib/kubelet/pods/d455fe16-80bf-42c1-be16-a87102249bf8/volumes" Feb 27 10:48:57 crc kubenswrapper[4998]: I0227 10:48:57.029843 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tf4n2"] Feb 27 10:48:57 crc kubenswrapper[4998]: I0227 10:48:57.043600 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tf4n2"] Feb 27 10:48:57 crc kubenswrapper[4998]: I0227 10:48:57.765327 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:48:57 crc kubenswrapper[4998]: E0227 10:48:57.765735 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:48:58 crc kubenswrapper[4998]: I0227 10:48:58.778774 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96d5b82-3e0b-49a0-be3d-7f2fae6dd592" path="/var/lib/kubelet/pods/c96d5b82-3e0b-49a0-be3d-7f2fae6dd592/volumes" Feb 27 10:49:10 crc kubenswrapper[4998]: I0227 10:49:10.768136 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:49:11 crc kubenswrapper[4998]: I0227 10:49:11.675659 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"04de93613a67bf6912778a8335db7a4a5c63027b3022fa81b03ae228d08d8d7b"} Feb 27 10:49:17 crc kubenswrapper[4998]: I0227 10:49:17.738039 4998 generic.go:334] "Generic (PLEG): container finished" podID="23592e13-bc1d-4017-918b-1b78a059e903" containerID="b989604b2dfe1f5e695f352ad712c63fcd1750220a0b3af512e0e3002977f13c" exitCode=0 Feb 27 10:49:17 crc kubenswrapper[4998]: I0227 10:49:17.738149 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" event={"ID":"23592e13-bc1d-4017-918b-1b78a059e903","Type":"ContainerDied","Data":"b989604b2dfe1f5e695f352ad712c63fcd1750220a0b3af512e0e3002977f13c"} Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.207427 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.348931 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-ssh-key-openstack-edpm-ipam\") pod \"23592e13-bc1d-4017-918b-1b78a059e903\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.349429 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnghl\" (UniqueName: \"kubernetes.io/projected/23592e13-bc1d-4017-918b-1b78a059e903-kube-api-access-wnghl\") pod \"23592e13-bc1d-4017-918b-1b78a059e903\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.349490 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-inventory\") pod \"23592e13-bc1d-4017-918b-1b78a059e903\" (UID: \"23592e13-bc1d-4017-918b-1b78a059e903\") " Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.355469 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23592e13-bc1d-4017-918b-1b78a059e903-kube-api-access-wnghl" (OuterVolumeSpecName: "kube-api-access-wnghl") pod "23592e13-bc1d-4017-918b-1b78a059e903" (UID: "23592e13-bc1d-4017-918b-1b78a059e903"). InnerVolumeSpecName "kube-api-access-wnghl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.383665 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-inventory" (OuterVolumeSpecName: "inventory") pod "23592e13-bc1d-4017-918b-1b78a059e903" (UID: "23592e13-bc1d-4017-918b-1b78a059e903"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.397283 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23592e13-bc1d-4017-918b-1b78a059e903" (UID: "23592e13-bc1d-4017-918b-1b78a059e903"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.451471 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnghl\" (UniqueName: \"kubernetes.io/projected/23592e13-bc1d-4017-918b-1b78a059e903-kube-api-access-wnghl\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.451517 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.451531 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23592e13-bc1d-4017-918b-1b78a059e903-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.753647 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" event={"ID":"23592e13-bc1d-4017-918b-1b78a059e903","Type":"ContainerDied","Data":"02f7d98af406ec43009480592417ab556108cf3ccde10d0b2776e9f5ff72d1c6"} Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.753683 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f7d98af406ec43009480592417ab556108cf3ccde10d0b2776e9f5ff72d1c6" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 
10:49:19.753734 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ph82m" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.853977 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq"] Feb 27 10:49:19 crc kubenswrapper[4998]: E0227 10:49:19.854449 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779a501d-c19a-4ee2-94ed-9449448195b2" containerName="oc" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.854470 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="779a501d-c19a-4ee2-94ed-9449448195b2" containerName="oc" Feb 27 10:49:19 crc kubenswrapper[4998]: E0227 10:49:19.854503 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23592e13-bc1d-4017-918b-1b78a059e903" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.854515 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="23592e13-bc1d-4017-918b-1b78a059e903" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.854740 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="23592e13-bc1d-4017-918b-1b78a059e903" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.854760 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="779a501d-c19a-4ee2-94ed-9449448195b2" containerName="oc" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.855454 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.858762 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.859032 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.859195 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.860990 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.887994 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq"] Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.994277 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qchw\" (UniqueName: \"kubernetes.io/projected/cba7f413-6f64-4932-99d8-5894c5f3ab7a-kube-api-access-7qchw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 10:49:19.994555 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:19 crc kubenswrapper[4998]: I0227 
10:49:19.994843 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.096851 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.096930 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qchw\" (UniqueName: \"kubernetes.io/projected/cba7f413-6f64-4932-99d8-5894c5f3ab7a-kube-api-access-7qchw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.097041 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.101584 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.102524 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.117771 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qchw\" (UniqueName: \"kubernetes.io/projected/cba7f413-6f64-4932-99d8-5894c5f3ab7a-kube-api-access-7qchw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.179043 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.691977 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq"] Feb 27 10:49:20 crc kubenswrapper[4998]: W0227 10:49:20.694740 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcba7f413_6f64_4932_99d8_5894c5f3ab7a.slice/crio-cc51cc6545a86735375cfb77f19efdfeecc9ef8d20dbdfb053dad5a13b78043b WatchSource:0}: Error finding container cc51cc6545a86735375cfb77f19efdfeecc9ef8d20dbdfb053dad5a13b78043b: Status 404 returned error can't find the container with id cc51cc6545a86735375cfb77f19efdfeecc9ef8d20dbdfb053dad5a13b78043b Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.697974 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:49:20 crc kubenswrapper[4998]: I0227 10:49:20.769248 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" event={"ID":"cba7f413-6f64-4932-99d8-5894c5f3ab7a","Type":"ContainerStarted","Data":"cc51cc6545a86735375cfb77f19efdfeecc9ef8d20dbdfb053dad5a13b78043b"} Feb 27 10:49:21 crc kubenswrapper[4998]: I0227 10:49:21.778844 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" event={"ID":"cba7f413-6f64-4932-99d8-5894c5f3ab7a","Type":"ContainerStarted","Data":"608542bbcaf016b74944a7ba0d9a735417763c202c42c2179fca538df1e032b2"} Feb 27 10:49:21 crc kubenswrapper[4998]: I0227 10:49:21.807839 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" podStartSLOduration=2.3838187619999998 podStartE2EDuration="2.807811883s" 
podCreationTimestamp="2026-02-27 10:49:19 +0000 UTC" firstStartedPulling="2026-02-27 10:49:20.69768866 +0000 UTC m=+1912.695959638" lastFinishedPulling="2026-02-27 10:49:21.121681751 +0000 UTC m=+1913.119952759" observedRunningTime="2026-02-27 10:49:21.797266862 +0000 UTC m=+1913.795537830" watchObservedRunningTime="2026-02-27 10:49:21.807811883 +0000 UTC m=+1913.806082861" Feb 27 10:49:25 crc kubenswrapper[4998]: I0227 10:49:25.825053 4998 generic.go:334] "Generic (PLEG): container finished" podID="cba7f413-6f64-4932-99d8-5894c5f3ab7a" containerID="608542bbcaf016b74944a7ba0d9a735417763c202c42c2179fca538df1e032b2" exitCode=0 Feb 27 10:49:25 crc kubenswrapper[4998]: I0227 10:49:25.825175 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" event={"ID":"cba7f413-6f64-4932-99d8-5894c5f3ab7a","Type":"ContainerDied","Data":"608542bbcaf016b74944a7ba0d9a735417763c202c42c2179fca538df1e032b2"} Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.208475 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.283473 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-inventory\") pod \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.283550 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-ssh-key-openstack-edpm-ipam\") pod \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.283630 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qchw\" (UniqueName: \"kubernetes.io/projected/cba7f413-6f64-4932-99d8-5894c5f3ab7a-kube-api-access-7qchw\") pod \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.292068 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba7f413-6f64-4932-99d8-5894c5f3ab7a-kube-api-access-7qchw" (OuterVolumeSpecName: "kube-api-access-7qchw") pod "cba7f413-6f64-4932-99d8-5894c5f3ab7a" (UID: "cba7f413-6f64-4932-99d8-5894c5f3ab7a"). InnerVolumeSpecName "kube-api-access-7qchw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:27 crc kubenswrapper[4998]: E0227 10:49:27.307556 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-ssh-key-openstack-edpm-ipam podName:cba7f413-6f64-4932-99d8-5894c5f3ab7a nodeName:}" failed. 
No retries permitted until 2026-02-27 10:49:27.807525576 +0000 UTC m=+1919.805796554 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-ssh-key-openstack-edpm-ipam") pod "cba7f413-6f64-4932-99d8-5894c5f3ab7a" (UID: "cba7f413-6f64-4932-99d8-5894c5f3ab7a") : error deleting /var/lib/kubelet/pods/cba7f413-6f64-4932-99d8-5894c5f3ab7a/volume-subpaths: remove /var/lib/kubelet/pods/cba7f413-6f64-4932-99d8-5894c5f3ab7a/volume-subpaths: no such file or directory Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.310619 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-inventory" (OuterVolumeSpecName: "inventory") pod "cba7f413-6f64-4932-99d8-5894c5f3ab7a" (UID: "cba7f413-6f64-4932-99d8-5894c5f3ab7a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.384549 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qchw\" (UniqueName: \"kubernetes.io/projected/cba7f413-6f64-4932-99d8-5894c5f3ab7a-kube-api-access-7qchw\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.384584 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.842763 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" event={"ID":"cba7f413-6f64-4932-99d8-5894c5f3ab7a","Type":"ContainerDied","Data":"cc51cc6545a86735375cfb77f19efdfeecc9ef8d20dbdfb053dad5a13b78043b"} Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.843065 4998 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="cc51cc6545a86735375cfb77f19efdfeecc9ef8d20dbdfb053dad5a13b78043b" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.842833 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.892581 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-ssh-key-openstack-edpm-ipam\") pod \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\" (UID: \"cba7f413-6f64-4932-99d8-5894c5f3ab7a\") " Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.898165 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cba7f413-6f64-4932-99d8-5894c5f3ab7a" (UID: "cba7f413-6f64-4932-99d8-5894c5f3ab7a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.920305 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k"] Feb 27 10:49:27 crc kubenswrapper[4998]: E0227 10:49:27.920889 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba7f413-6f64-4932-99d8-5894c5f3ab7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.920969 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba7f413-6f64-4932-99d8-5894c5f3ab7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.921296 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba7f413-6f64-4932-99d8-5894c5f3ab7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.921924 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.932933 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k"] Feb 27 10:49:27 crc kubenswrapper[4998]: I0227 10:49:27.995202 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba7f413-6f64-4932-99d8-5894c5f3ab7a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.097550 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-259l7\" (UniqueName: \"kubernetes.io/projected/4feb60c2-418d-4406-ac16-26ad4e5d0825-kube-api-access-259l7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5dm8k\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.097735 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5dm8k\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.098204 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5dm8k\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc 
kubenswrapper[4998]: I0227 10:49:28.199700 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-259l7\" (UniqueName: \"kubernetes.io/projected/4feb60c2-418d-4406-ac16-26ad4e5d0825-kube-api-access-259l7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5dm8k\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.199773 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5dm8k\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.199884 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5dm8k\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.204083 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5dm8k\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.205804 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5dm8k\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.217202 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-259l7\" (UniqueName: \"kubernetes.io/projected/4feb60c2-418d-4406-ac16-26ad4e5d0825-kube-api-access-259l7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5dm8k\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.286354 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.837608 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k"] Feb 27 10:49:28 crc kubenswrapper[4998]: I0227 10:49:28.854038 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" event={"ID":"4feb60c2-418d-4406-ac16-26ad4e5d0825","Type":"ContainerStarted","Data":"994f64f9dc714f5ea98e92353534176327fc11e87f923ed03b9700a704e207ee"} Feb 27 10:49:29 crc kubenswrapper[4998]: I0227 10:49:29.869834 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" event={"ID":"4feb60c2-418d-4406-ac16-26ad4e5d0825","Type":"ContainerStarted","Data":"50119d1f6ab906e67ba9f28e12d6119a41b7956fb64b8060e577a73e9fe1ad17"} Feb 27 10:49:29 crc kubenswrapper[4998]: I0227 10:49:29.895923 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" 
podStartSLOduration=2.5106104289999998 podStartE2EDuration="2.895905287s" podCreationTimestamp="2026-02-27 10:49:27 +0000 UTC" firstStartedPulling="2026-02-27 10:49:28.843382041 +0000 UTC m=+1920.841653019" lastFinishedPulling="2026-02-27 10:49:29.228676869 +0000 UTC m=+1921.226947877" observedRunningTime="2026-02-27 10:49:29.886141702 +0000 UTC m=+1921.884412670" watchObservedRunningTime="2026-02-27 10:49:29.895905287 +0000 UTC m=+1921.894176255" Feb 27 10:49:35 crc kubenswrapper[4998]: I0227 10:49:35.054939 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5249f"] Feb 27 10:49:35 crc kubenswrapper[4998]: I0227 10:49:35.063977 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5249f"] Feb 27 10:49:35 crc kubenswrapper[4998]: I0227 10:49:35.074330 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-w2dlm"] Feb 27 10:49:35 crc kubenswrapper[4998]: I0227 10:49:35.084960 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kbvf6"] Feb 27 10:49:35 crc kubenswrapper[4998]: I0227 10:49:35.093600 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8d8b-account-create-update-gkdwk"] Feb 27 10:49:35 crc kubenswrapper[4998]: I0227 10:49:35.101460 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8d8b-account-create-update-gkdwk"] Feb 27 10:49:35 crc kubenswrapper[4998]: I0227 10:49:35.110481 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kbvf6"] Feb 27 10:49:35 crc kubenswrapper[4998]: I0227 10:49:35.118147 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-w2dlm"] Feb 27 10:49:36 crc kubenswrapper[4998]: I0227 10:49:36.778412 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eea7aa4-efe1-4601-84a2-5dce0446e27e" 
path="/var/lib/kubelet/pods/2eea7aa4-efe1-4601-84a2-5dce0446e27e/volumes" Feb 27 10:49:36 crc kubenswrapper[4998]: I0227 10:49:36.779711 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa43c986-f63e-4e51-9c39-6f3d39260745" path="/var/lib/kubelet/pods/aa43c986-f63e-4e51-9c39-6f3d39260745/volumes" Feb 27 10:49:36 crc kubenswrapper[4998]: I0227 10:49:36.780503 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa6944d-79e7-49cd-a3c3-4b183ac32c63" path="/var/lib/kubelet/pods/bfa6944d-79e7-49cd-a3c3-4b183ac32c63/volumes" Feb 27 10:49:36 crc kubenswrapper[4998]: I0227 10:49:36.781344 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb09f66d-895d-4368-beac-96ae5467a97a" path="/var/lib/kubelet/pods/eb09f66d-895d-4368-beac-96ae5467a97a/volumes" Feb 27 10:49:37 crc kubenswrapper[4998]: I0227 10:49:37.037545 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8f1e-account-create-update-s2nlt"] Feb 27 10:49:37 crc kubenswrapper[4998]: I0227 10:49:37.047193 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2d4e-account-create-update-ntthm"] Feb 27 10:49:37 crc kubenswrapper[4998]: I0227 10:49:37.054017 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8f1e-account-create-update-s2nlt"] Feb 27 10:49:37 crc kubenswrapper[4998]: I0227 10:49:37.060596 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2d4e-account-create-update-ntthm"] Feb 27 10:49:38 crc kubenswrapper[4998]: I0227 10:49:38.780648 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159df315-6004-48f3-a1f3-192ee4c02588" path="/var/lib/kubelet/pods/159df315-6004-48f3-a1f3-192ee4c02588/volumes" Feb 27 10:49:38 crc kubenswrapper[4998]: I0227 10:49:38.781864 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3d5ef4-3dac-442c-b6e8-64435a2f474a" 
path="/var/lib/kubelet/pods/dc3d5ef4-3dac-442c-b6e8-64435a2f474a/volumes" Feb 27 10:49:47 crc kubenswrapper[4998]: I0227 10:49:47.894987 4998 scope.go:117] "RemoveContainer" containerID="d1042d168a64a3e8094c92b3891df6353ed3fd743c70ef714ff254aacd631081" Feb 27 10:49:47 crc kubenswrapper[4998]: I0227 10:49:47.920473 4998 scope.go:117] "RemoveContainer" containerID="19e9bda0bf3b7df19880c8ecb26998dd1afa76225e70f831cbfb3ee11ce597f7" Feb 27 10:49:47 crc kubenswrapper[4998]: I0227 10:49:47.972626 4998 scope.go:117] "RemoveContainer" containerID="d11113c411e393c59459117cef4dc1d2fabe18c236fb6d17bfdc73913454bed4" Feb 27 10:49:48 crc kubenswrapper[4998]: I0227 10:49:48.001498 4998 scope.go:117] "RemoveContainer" containerID="f0c82b8579f470c3c6cc44b4c3354d4c9bbff2e80f7f102e17391d38f2c19d98" Feb 27 10:49:48 crc kubenswrapper[4998]: I0227 10:49:48.043866 4998 scope.go:117] "RemoveContainer" containerID="0da6500eaac45dc91f8b760e81e1d83245b2c4fb712eef06ca1085b4e34f83b1" Feb 27 10:49:48 crc kubenswrapper[4998]: I0227 10:49:48.079289 4998 scope.go:117] "RemoveContainer" containerID="f4316920d4d3df2d0cebe2ec917145c808ee1bfd4546870b1316f784d8e64ea4" Feb 27 10:49:48 crc kubenswrapper[4998]: I0227 10:49:48.137992 4998 scope.go:117] "RemoveContainer" containerID="3405e474b165ed537e9aaa3f84be16a6f9af9ab98609252f615e86c7db3d2c57" Feb 27 10:49:48 crc kubenswrapper[4998]: I0227 10:49:48.156455 4998 scope.go:117] "RemoveContainer" containerID="7ed0129e40a525787727447885326d046e1144735af18ff175409dfe36d92c0b" Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.149752 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536490-qzmpd"] Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.151553 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536490-qzmpd" Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.153624 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.153879 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.154010 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.163833 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536490-qzmpd"] Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.310830 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk2mh\" (UniqueName: \"kubernetes.io/projected/6af23187-3396-44e8-8d49-96deb9d77a91-kube-api-access-qk2mh\") pod \"auto-csr-approver-29536490-qzmpd\" (UID: \"6af23187-3396-44e8-8d49-96deb9d77a91\") " pod="openshift-infra/auto-csr-approver-29536490-qzmpd" Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.413042 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk2mh\" (UniqueName: \"kubernetes.io/projected/6af23187-3396-44e8-8d49-96deb9d77a91-kube-api-access-qk2mh\") pod \"auto-csr-approver-29536490-qzmpd\" (UID: \"6af23187-3396-44e8-8d49-96deb9d77a91\") " pod="openshift-infra/auto-csr-approver-29536490-qzmpd" Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.444794 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk2mh\" (UniqueName: \"kubernetes.io/projected/6af23187-3396-44e8-8d49-96deb9d77a91-kube-api-access-qk2mh\") pod \"auto-csr-approver-29536490-qzmpd\" (UID: \"6af23187-3396-44e8-8d49-96deb9d77a91\") " 
pod="openshift-infra/auto-csr-approver-29536490-qzmpd" Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.469831 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536490-qzmpd" Feb 27 10:50:00 crc kubenswrapper[4998]: I0227 10:50:00.917828 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536490-qzmpd"] Feb 27 10:50:01 crc kubenswrapper[4998]: I0227 10:50:01.027824 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536490-qzmpd" event={"ID":"6af23187-3396-44e8-8d49-96deb9d77a91","Type":"ContainerStarted","Data":"b25e0242bbb60cec7440ba429197862b3edcf555ec699e674fd5d07091d450d5"} Feb 27 10:50:03 crc kubenswrapper[4998]: I0227 10:50:03.048975 4998 generic.go:334] "Generic (PLEG): container finished" podID="6af23187-3396-44e8-8d49-96deb9d77a91" containerID="5b75aa082c7e0c2d83962823826db1897428a5c5fe7ab7c09b7255f6bf953d5e" exitCode=0 Feb 27 10:50:03 crc kubenswrapper[4998]: I0227 10:50:03.049683 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536490-qzmpd" event={"ID":"6af23187-3396-44e8-8d49-96deb9d77a91","Type":"ContainerDied","Data":"5b75aa082c7e0c2d83962823826db1897428a5c5fe7ab7c09b7255f6bf953d5e"} Feb 27 10:50:04 crc kubenswrapper[4998]: I0227 10:50:04.431446 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536490-qzmpd" Feb 27 10:50:04 crc kubenswrapper[4998]: I0227 10:50:04.592716 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk2mh\" (UniqueName: \"kubernetes.io/projected/6af23187-3396-44e8-8d49-96deb9d77a91-kube-api-access-qk2mh\") pod \"6af23187-3396-44e8-8d49-96deb9d77a91\" (UID: \"6af23187-3396-44e8-8d49-96deb9d77a91\") " Feb 27 10:50:04 crc kubenswrapper[4998]: I0227 10:50:04.601203 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af23187-3396-44e8-8d49-96deb9d77a91-kube-api-access-qk2mh" (OuterVolumeSpecName: "kube-api-access-qk2mh") pod "6af23187-3396-44e8-8d49-96deb9d77a91" (UID: "6af23187-3396-44e8-8d49-96deb9d77a91"). InnerVolumeSpecName "kube-api-access-qk2mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:04 crc kubenswrapper[4998]: I0227 10:50:04.695810 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk2mh\" (UniqueName: \"kubernetes.io/projected/6af23187-3396-44e8-8d49-96deb9d77a91-kube-api-access-qk2mh\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:05 crc kubenswrapper[4998]: I0227 10:50:05.068889 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536490-qzmpd" event={"ID":"6af23187-3396-44e8-8d49-96deb9d77a91","Type":"ContainerDied","Data":"b25e0242bbb60cec7440ba429197862b3edcf555ec699e674fd5d07091d450d5"} Feb 27 10:50:05 crc kubenswrapper[4998]: I0227 10:50:05.068937 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b25e0242bbb60cec7440ba429197862b3edcf555ec699e674fd5d07091d450d5" Feb 27 10:50:05 crc kubenswrapper[4998]: I0227 10:50:05.069015 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536490-qzmpd" Feb 27 10:50:05 crc kubenswrapper[4998]: I0227 10:50:05.524016 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536484-jpxns"] Feb 27 10:50:05 crc kubenswrapper[4998]: I0227 10:50:05.531610 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536484-jpxns"] Feb 27 10:50:06 crc kubenswrapper[4998]: I0227 10:50:06.776152 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ba4b4e-2b1b-41b9-8535-84119f0db0e9" path="/var/lib/kubelet/pods/44ba4b4e-2b1b-41b9-8535-84119f0db0e9/volumes" Feb 27 10:50:08 crc kubenswrapper[4998]: I0227 10:50:08.045859 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p7ghm"] Feb 27 10:50:08 crc kubenswrapper[4998]: I0227 10:50:08.074727 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p7ghm"] Feb 27 10:50:08 crc kubenswrapper[4998]: I0227 10:50:08.097889 4998 generic.go:334] "Generic (PLEG): container finished" podID="4feb60c2-418d-4406-ac16-26ad4e5d0825" containerID="50119d1f6ab906e67ba9f28e12d6119a41b7956fb64b8060e577a73e9fe1ad17" exitCode=0 Feb 27 10:50:08 crc kubenswrapper[4998]: I0227 10:50:08.097946 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" event={"ID":"4feb60c2-418d-4406-ac16-26ad4e5d0825","Type":"ContainerDied","Data":"50119d1f6ab906e67ba9f28e12d6119a41b7956fb64b8060e577a73e9fe1ad17"} Feb 27 10:50:08 crc kubenswrapper[4998]: I0227 10:50:08.782474 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9abdc95-5c73-40b8-a234-8b13e7be1cec" path="/var/lib/kubelet/pods/a9abdc95-5c73-40b8-a234-8b13e7be1cec/volumes" Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.611973 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.741942 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-259l7\" (UniqueName: \"kubernetes.io/projected/4feb60c2-418d-4406-ac16-26ad4e5d0825-kube-api-access-259l7\") pod \"4feb60c2-418d-4406-ac16-26ad4e5d0825\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.742092 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-ssh-key-openstack-edpm-ipam\") pod \"4feb60c2-418d-4406-ac16-26ad4e5d0825\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.742168 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-inventory\") pod \"4feb60c2-418d-4406-ac16-26ad4e5d0825\" (UID: \"4feb60c2-418d-4406-ac16-26ad4e5d0825\") " Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.747989 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4feb60c2-418d-4406-ac16-26ad4e5d0825-kube-api-access-259l7" (OuterVolumeSpecName: "kube-api-access-259l7") pod "4feb60c2-418d-4406-ac16-26ad4e5d0825" (UID: "4feb60c2-418d-4406-ac16-26ad4e5d0825"). InnerVolumeSpecName "kube-api-access-259l7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.774445 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4feb60c2-418d-4406-ac16-26ad4e5d0825" (UID: "4feb60c2-418d-4406-ac16-26ad4e5d0825"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.783412 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-inventory" (OuterVolumeSpecName: "inventory") pod "4feb60c2-418d-4406-ac16-26ad4e5d0825" (UID: "4feb60c2-418d-4406-ac16-26ad4e5d0825"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.843993 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-259l7\" (UniqueName: \"kubernetes.io/projected/4feb60c2-418d-4406-ac16-26ad4e5d0825-kube-api-access-259l7\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.844049 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:09 crc kubenswrapper[4998]: I0227 10:50:09.844107 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4feb60c2-418d-4406-ac16-26ad4e5d0825-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.114195 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" 
event={"ID":"4feb60c2-418d-4406-ac16-26ad4e5d0825","Type":"ContainerDied","Data":"994f64f9dc714f5ea98e92353534176327fc11e87f923ed03b9700a704e207ee"} Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.114270 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5dm8k" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.114273 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="994f64f9dc714f5ea98e92353534176327fc11e87f923ed03b9700a704e207ee" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.202082 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5"] Feb 27 10:50:10 crc kubenswrapper[4998]: E0227 10:50:10.202471 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af23187-3396-44e8-8d49-96deb9d77a91" containerName="oc" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.202491 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af23187-3396-44e8-8d49-96deb9d77a91" containerName="oc" Feb 27 10:50:10 crc kubenswrapper[4998]: E0227 10:50:10.202504 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4feb60c2-418d-4406-ac16-26ad4e5d0825" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.202515 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4feb60c2-418d-4406-ac16-26ad4e5d0825" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.202697 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="4feb60c2-418d-4406-ac16-26ad4e5d0825" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.202721 4998 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6af23187-3396-44e8-8d49-96deb9d77a91" containerName="oc" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.203318 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.205704 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.206180 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.206921 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.208153 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.219983 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5"] Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.354252 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w89m5\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.354344 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w89m5\" (UID: 
\"090bec07-4f2c-4d72-be75-06274215cdf1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.354410 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk66b\" (UniqueName: \"kubernetes.io/projected/090bec07-4f2c-4d72-be75-06274215cdf1-kube-api-access-jk66b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w89m5\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.456662 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w89m5\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.456773 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w89m5\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.456848 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk66b\" (UniqueName: \"kubernetes.io/projected/090bec07-4f2c-4d72-be75-06274215cdf1-kube-api-access-jk66b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w89m5\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.460077 
4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w89m5\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.460162 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w89m5\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.477078 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk66b\" (UniqueName: \"kubernetes.io/projected/090bec07-4f2c-4d72-be75-06274215cdf1-kube-api-access-jk66b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w89m5\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.521771 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:50:10 crc kubenswrapper[4998]: I0227 10:50:10.920994 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5"] Feb 27 10:50:11 crc kubenswrapper[4998]: I0227 10:50:11.123614 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" event={"ID":"090bec07-4f2c-4d72-be75-06274215cdf1","Type":"ContainerStarted","Data":"0abc5695a798f343d0757c2a565f45ac63ee385b490a98190d109f5fcd4f57fd"} Feb 27 10:50:12 crc kubenswrapper[4998]: I0227 10:50:12.133642 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" event={"ID":"090bec07-4f2c-4d72-be75-06274215cdf1","Type":"ContainerStarted","Data":"2e10c323fc582d099a4f9659bf41ac4184bdf3dfcd140d7f63872469a59ff5ed"} Feb 27 10:50:30 crc kubenswrapper[4998]: I0227 10:50:30.048247 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" podStartSLOduration=19.624987642 podStartE2EDuration="20.048203237s" podCreationTimestamp="2026-02-27 10:50:10 +0000 UTC" firstStartedPulling="2026-02-27 10:50:10.926623023 +0000 UTC m=+1962.924893991" lastFinishedPulling="2026-02-27 10:50:11.349838618 +0000 UTC m=+1963.348109586" observedRunningTime="2026-02-27 10:50:12.153849485 +0000 UTC m=+1964.152120453" watchObservedRunningTime="2026-02-27 10:50:30.048203237 +0000 UTC m=+1982.046474205" Feb 27 10:50:30 crc kubenswrapper[4998]: I0227 10:50:30.051065 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-czjfd"] Feb 27 10:50:30 crc kubenswrapper[4998]: I0227 10:50:30.060971 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rwrfg"] Feb 27 10:50:30 crc kubenswrapper[4998]: 
I0227 10:50:30.070545 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-czjfd"] Feb 27 10:50:30 crc kubenswrapper[4998]: I0227 10:50:30.082862 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rwrfg"] Feb 27 10:50:30 crc kubenswrapper[4998]: I0227 10:50:30.775906 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c5290fa-d988-420b-88de-065d0558ea40" path="/var/lib/kubelet/pods/7c5290fa-d988-420b-88de-065d0558ea40/volumes" Feb 27 10:50:30 crc kubenswrapper[4998]: I0227 10:50:30.776606 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92acff51-4ca2-43c6-ab0f-480e01e9efb8" path="/var/lib/kubelet/pods/92acff51-4ca2-43c6-ab0f-480e01e9efb8/volumes" Feb 27 10:50:48 crc kubenswrapper[4998]: I0227 10:50:48.285485 4998 scope.go:117] "RemoveContainer" containerID="25b4d582b6dd8ffbbf76388d717f28f867dc4707194de6b1df65a414d23975ce" Feb 27 10:50:48 crc kubenswrapper[4998]: I0227 10:50:48.332881 4998 scope.go:117] "RemoveContainer" containerID="71e78464a33c722c93cd258774f6086a1dd7d42f9906c3cc6ae3dd873c06f64d" Feb 27 10:50:48 crc kubenswrapper[4998]: I0227 10:50:48.384627 4998 scope.go:117] "RemoveContainer" containerID="1f2bd1a75c1db5d148f7f3589d1ccaf5bbda642064c1c14754660ab554c548fb" Feb 27 10:50:48 crc kubenswrapper[4998]: I0227 10:50:48.442195 4998 scope.go:117] "RemoveContainer" containerID="550199ff57ddd5616ea967474384544dce1ead6cabf6b337129b1634632860fa" Feb 27 10:50:58 crc kubenswrapper[4998]: I0227 10:50:58.587367 4998 generic.go:334] "Generic (PLEG): container finished" podID="090bec07-4f2c-4d72-be75-06274215cdf1" containerID="2e10c323fc582d099a4f9659bf41ac4184bdf3dfcd140d7f63872469a59ff5ed" exitCode=0 Feb 27 10:50:58 crc kubenswrapper[4998]: I0227 10:50:58.587501 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" 
event={"ID":"090bec07-4f2c-4d72-be75-06274215cdf1","Type":"ContainerDied","Data":"2e10c323fc582d099a4f9659bf41ac4184bdf3dfcd140d7f63872469a59ff5ed"} Feb 27 10:50:59 crc kubenswrapper[4998]: I0227 10:50:59.987340 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.116169 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-inventory\") pod \"090bec07-4f2c-4d72-be75-06274215cdf1\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.116243 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk66b\" (UniqueName: \"kubernetes.io/projected/090bec07-4f2c-4d72-be75-06274215cdf1-kube-api-access-jk66b\") pod \"090bec07-4f2c-4d72-be75-06274215cdf1\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.116349 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-ssh-key-openstack-edpm-ipam\") pod \"090bec07-4f2c-4d72-be75-06274215cdf1\" (UID: \"090bec07-4f2c-4d72-be75-06274215cdf1\") " Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.121901 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090bec07-4f2c-4d72-be75-06274215cdf1-kube-api-access-jk66b" (OuterVolumeSpecName: "kube-api-access-jk66b") pod "090bec07-4f2c-4d72-be75-06274215cdf1" (UID: "090bec07-4f2c-4d72-be75-06274215cdf1"). InnerVolumeSpecName "kube-api-access-jk66b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.145147 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "090bec07-4f2c-4d72-be75-06274215cdf1" (UID: "090bec07-4f2c-4d72-be75-06274215cdf1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.152809 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-inventory" (OuterVolumeSpecName: "inventory") pod "090bec07-4f2c-4d72-be75-06274215cdf1" (UID: "090bec07-4f2c-4d72-be75-06274215cdf1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.219175 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.219214 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/090bec07-4f2c-4d72-be75-06274215cdf1-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.219244 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk66b\" (UniqueName: \"kubernetes.io/projected/090bec07-4f2c-4d72-be75-06274215cdf1-kube-api-access-jk66b\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.612558 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" 
event={"ID":"090bec07-4f2c-4d72-be75-06274215cdf1","Type":"ContainerDied","Data":"0abc5695a798f343d0757c2a565f45ac63ee385b490a98190d109f5fcd4f57fd"} Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.613025 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0abc5695a798f343d0757c2a565f45ac63ee385b490a98190d109f5fcd4f57fd" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.612631 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w89m5" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.732299 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-q5888"] Feb 27 10:51:00 crc kubenswrapper[4998]: E0227 10:51:00.732817 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090bec07-4f2c-4d72-be75-06274215cdf1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.732835 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="090bec07-4f2c-4d72-be75-06274215cdf1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.733075 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="090bec07-4f2c-4d72-be75-06274215cdf1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.733784 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.740477 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-q5888"] Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.741648 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.741904 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.742025 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.742179 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.935220 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwgs\" (UniqueName: \"kubernetes.io/projected/176d4d7b-4d9a-4b59-a556-f534a5a574d8-kube-api-access-gxwgs\") pod \"ssh-known-hosts-edpm-deployment-q5888\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.935360 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-q5888\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:00 crc kubenswrapper[4998]: I0227 10:51:00.935424 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-q5888\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:01 crc kubenswrapper[4998]: I0227 10:51:01.037611 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-q5888\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:01 crc kubenswrapper[4998]: I0227 10:51:01.037718 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-q5888\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:01 crc kubenswrapper[4998]: I0227 10:51:01.038627 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwgs\" (UniqueName: \"kubernetes.io/projected/176d4d7b-4d9a-4b59-a556-f534a5a574d8-kube-api-access-gxwgs\") pod \"ssh-known-hosts-edpm-deployment-q5888\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:01 crc kubenswrapper[4998]: I0227 10:51:01.042458 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-q5888\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:01 crc kubenswrapper[4998]: I0227 10:51:01.042924 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-q5888\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:01 crc kubenswrapper[4998]: I0227 10:51:01.058380 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwgs\" (UniqueName: \"kubernetes.io/projected/176d4d7b-4d9a-4b59-a556-f534a5a574d8-kube-api-access-gxwgs\") pod \"ssh-known-hosts-edpm-deployment-q5888\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:01 crc kubenswrapper[4998]: I0227 10:51:01.356069 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:01 crc kubenswrapper[4998]: I0227 10:51:01.889943 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-q5888"] Feb 27 10:51:02 crc kubenswrapper[4998]: I0227 10:51:02.632466 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q5888" event={"ID":"176d4d7b-4d9a-4b59-a556-f534a5a574d8","Type":"ContainerStarted","Data":"cd2eeed94f84e376fa9dbc86533bf96e8a9a55f273b677448a3c75d97c0fee58"} Feb 27 10:51:02 crc kubenswrapper[4998]: I0227 10:51:02.632744 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q5888" event={"ID":"176d4d7b-4d9a-4b59-a556-f534a5a574d8","Type":"ContainerStarted","Data":"4660412996ccfa399723035ee53ecc462031bf850b4d850d8bc23448510cd175"} Feb 27 10:51:02 crc kubenswrapper[4998]: I0227 10:51:02.652065 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-q5888" 
podStartSLOduration=2.230888079 podStartE2EDuration="2.652040228s" podCreationTimestamp="2026-02-27 10:51:00 +0000 UTC" firstStartedPulling="2026-02-27 10:51:01.891550262 +0000 UTC m=+2013.889821230" lastFinishedPulling="2026-02-27 10:51:02.312702411 +0000 UTC m=+2014.310973379" observedRunningTime="2026-02-27 10:51:02.646155674 +0000 UTC m=+2014.644426642" watchObservedRunningTime="2026-02-27 10:51:02.652040228 +0000 UTC m=+2014.650311196" Feb 27 10:51:09 crc kubenswrapper[4998]: I0227 10:51:09.720806 4998 generic.go:334] "Generic (PLEG): container finished" podID="176d4d7b-4d9a-4b59-a556-f534a5a574d8" containerID="cd2eeed94f84e376fa9dbc86533bf96e8a9a55f273b677448a3c75d97c0fee58" exitCode=0 Feb 27 10:51:09 crc kubenswrapper[4998]: I0227 10:51:09.720892 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q5888" event={"ID":"176d4d7b-4d9a-4b59-a556-f534a5a574d8","Type":"ContainerDied","Data":"cd2eeed94f84e376fa9dbc86533bf96e8a9a55f273b677448a3c75d97c0fee58"} Feb 27 10:51:10 crc kubenswrapper[4998]: I0227 10:51:10.504989 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:51:10 crc kubenswrapper[4998]: I0227 10:51:10.505409 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.217137 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.355007 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxwgs\" (UniqueName: \"kubernetes.io/projected/176d4d7b-4d9a-4b59-a556-f534a5a574d8-kube-api-access-gxwgs\") pod \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.355169 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-ssh-key-openstack-edpm-ipam\") pod \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.355200 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-inventory-0\") pod \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\" (UID: \"176d4d7b-4d9a-4b59-a556-f534a5a574d8\") " Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.360187 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176d4d7b-4d9a-4b59-a556-f534a5a574d8-kube-api-access-gxwgs" (OuterVolumeSpecName: "kube-api-access-gxwgs") pod "176d4d7b-4d9a-4b59-a556-f534a5a574d8" (UID: "176d4d7b-4d9a-4b59-a556-f534a5a574d8"). InnerVolumeSpecName "kube-api-access-gxwgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.378664 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "176d4d7b-4d9a-4b59-a556-f534a5a574d8" (UID: "176d4d7b-4d9a-4b59-a556-f534a5a574d8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.383021 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "176d4d7b-4d9a-4b59-a556-f534a5a574d8" (UID: "176d4d7b-4d9a-4b59-a556-f534a5a574d8"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.457768 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.457975 4998 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/176d4d7b-4d9a-4b59-a556-f534a5a574d8-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.458094 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxwgs\" (UniqueName: \"kubernetes.io/projected/176d4d7b-4d9a-4b59-a556-f534a5a574d8-kube-api-access-gxwgs\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.744385 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q5888" 
event={"ID":"176d4d7b-4d9a-4b59-a556-f534a5a574d8","Type":"ContainerDied","Data":"4660412996ccfa399723035ee53ecc462031bf850b4d850d8bc23448510cd175"} Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.744453 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4660412996ccfa399723035ee53ecc462031bf850b4d850d8bc23448510cd175" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.744511 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q5888" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.816949 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc"] Feb 27 10:51:11 crc kubenswrapper[4998]: E0227 10:51:11.817371 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176d4d7b-4d9a-4b59-a556-f534a5a574d8" containerName="ssh-known-hosts-edpm-deployment" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.817387 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="176d4d7b-4d9a-4b59-a556-f534a5a574d8" containerName="ssh-known-hosts-edpm-deployment" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.817562 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="176d4d7b-4d9a-4b59-a556-f534a5a574d8" containerName="ssh-known-hosts-edpm-deployment" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.818174 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.822993 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.823063 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.823107 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.823277 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.830830 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc"] Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.971137 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dznmc\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.971203 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgm2r\" (UniqueName: \"kubernetes.io/projected/46e0d517-7041-41ce-8cbc-9ed19afff0cb-kube-api-access-dgm2r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dznmc\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:11 crc kubenswrapper[4998]: I0227 10:51:11.971397 4998 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dznmc\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:12 crc kubenswrapper[4998]: I0227 10:51:12.072714 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dznmc\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:12 crc kubenswrapper[4998]: I0227 10:51:12.072772 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgm2r\" (UniqueName: \"kubernetes.io/projected/46e0d517-7041-41ce-8cbc-9ed19afff0cb-kube-api-access-dgm2r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dznmc\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:12 crc kubenswrapper[4998]: I0227 10:51:12.072873 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dznmc\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:12 crc kubenswrapper[4998]: I0227 10:51:12.077030 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dznmc\" (UID: 
\"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:12 crc kubenswrapper[4998]: I0227 10:51:12.077108 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dznmc\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:12 crc kubenswrapper[4998]: I0227 10:51:12.096663 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgm2r\" (UniqueName: \"kubernetes.io/projected/46e0d517-7041-41ce-8cbc-9ed19afff0cb-kube-api-access-dgm2r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dznmc\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:12 crc kubenswrapper[4998]: I0227 10:51:12.136547 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:12 crc kubenswrapper[4998]: I0227 10:51:12.721063 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc"] Feb 27 10:51:12 crc kubenswrapper[4998]: I0227 10:51:12.758191 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" event={"ID":"46e0d517-7041-41ce-8cbc-9ed19afff0cb","Type":"ContainerStarted","Data":"fe86277ee5428bc06e13595a5cf84c6614e98cada1d559c3c76a9dc0976ed5eb"} Feb 27 10:51:13 crc kubenswrapper[4998]: I0227 10:51:13.773208 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" event={"ID":"46e0d517-7041-41ce-8cbc-9ed19afff0cb","Type":"ContainerStarted","Data":"0126b0d7609c6f6671959734852b27267a89dc9bbd64ba780e416328224078db"} Feb 27 10:51:13 crc kubenswrapper[4998]: I0227 10:51:13.802085 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" podStartSLOduration=2.32118308 podStartE2EDuration="2.802056822s" podCreationTimestamp="2026-02-27 10:51:11 +0000 UTC" firstStartedPulling="2026-02-27 10:51:12.726815884 +0000 UTC m=+2024.725086862" lastFinishedPulling="2026-02-27 10:51:13.207689636 +0000 UTC m=+2025.205960604" observedRunningTime="2026-02-27 10:51:13.794970378 +0000 UTC m=+2025.793241366" watchObservedRunningTime="2026-02-27 10:51:13.802056822 +0000 UTC m=+2025.800327800" Feb 27 10:51:15 crc kubenswrapper[4998]: I0227 10:51:15.043959 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k7cgp"] Feb 27 10:51:15 crc kubenswrapper[4998]: I0227 10:51:15.054648 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k7cgp"] Feb 27 10:51:16 crc kubenswrapper[4998]: I0227 10:51:16.779386 4998 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00fdbd39-90de-4af0-b166-7b8f106ec115" path="/var/lib/kubelet/pods/00fdbd39-90de-4af0-b166-7b8f106ec115/volumes" Feb 27 10:51:20 crc kubenswrapper[4998]: I0227 10:51:20.853816 4998 generic.go:334] "Generic (PLEG): container finished" podID="46e0d517-7041-41ce-8cbc-9ed19afff0cb" containerID="0126b0d7609c6f6671959734852b27267a89dc9bbd64ba780e416328224078db" exitCode=0 Feb 27 10:51:20 crc kubenswrapper[4998]: I0227 10:51:20.854483 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" event={"ID":"46e0d517-7041-41ce-8cbc-9ed19afff0cb","Type":"ContainerDied","Data":"0126b0d7609c6f6671959734852b27267a89dc9bbd64ba780e416328224078db"} Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.331596 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.471885 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-inventory\") pod \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.471963 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgm2r\" (UniqueName: \"kubernetes.io/projected/46e0d517-7041-41ce-8cbc-9ed19afff0cb-kube-api-access-dgm2r\") pod \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.472092 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-ssh-key-openstack-edpm-ipam\") pod 
\"46e0d517-7041-41ce-8cbc-9ed19afff0cb\" (UID: \"46e0d517-7041-41ce-8cbc-9ed19afff0cb\") " Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.478725 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e0d517-7041-41ce-8cbc-9ed19afff0cb-kube-api-access-dgm2r" (OuterVolumeSpecName: "kube-api-access-dgm2r") pod "46e0d517-7041-41ce-8cbc-9ed19afff0cb" (UID: "46e0d517-7041-41ce-8cbc-9ed19afff0cb"). InnerVolumeSpecName "kube-api-access-dgm2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.514270 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-inventory" (OuterVolumeSpecName: "inventory") pod "46e0d517-7041-41ce-8cbc-9ed19afff0cb" (UID: "46e0d517-7041-41ce-8cbc-9ed19afff0cb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.522858 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "46e0d517-7041-41ce-8cbc-9ed19afff0cb" (UID: "46e0d517-7041-41ce-8cbc-9ed19afff0cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.573981 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.574014 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgm2r\" (UniqueName: \"kubernetes.io/projected/46e0d517-7041-41ce-8cbc-9ed19afff0cb-kube-api-access-dgm2r\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.574026 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46e0d517-7041-41ce-8cbc-9ed19afff0cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.881749 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" event={"ID":"46e0d517-7041-41ce-8cbc-9ed19afff0cb","Type":"ContainerDied","Data":"fe86277ee5428bc06e13595a5cf84c6614e98cada1d559c3c76a9dc0976ed5eb"} Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.881818 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe86277ee5428bc06e13595a5cf84c6614e98cada1d559c3c76a9dc0976ed5eb" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.881879 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dznmc" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.969243 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48"] Feb 27 10:51:22 crc kubenswrapper[4998]: E0227 10:51:22.969894 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e0d517-7041-41ce-8cbc-9ed19afff0cb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.970029 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e0d517-7041-41ce-8cbc-9ed19afff0cb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.970498 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e0d517-7041-41ce-8cbc-9ed19afff0cb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.971382 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.973444 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.974085 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.974314 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.974470 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:51:22 crc kubenswrapper[4998]: I0227 10:51:22.981161 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48"] Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.085117 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2bw7\" (UniqueName: \"kubernetes.io/projected/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-kube-api-access-n2bw7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gml48\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.085182 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gml48\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 
10:51:23.085299 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gml48\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.186734 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gml48\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.186904 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2bw7\" (UniqueName: \"kubernetes.io/projected/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-kube-api-access-n2bw7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gml48\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.186932 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gml48\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.191689 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-gml48\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.193367 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gml48\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.210511 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2bw7\" (UniqueName: \"kubernetes.io/projected/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-kube-api-access-n2bw7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gml48\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.303341 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:23 crc kubenswrapper[4998]: I0227 10:51:23.901894 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48"] Feb 27 10:51:24 crc kubenswrapper[4998]: I0227 10:51:24.908152 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" event={"ID":"96387bdc-46ea-4452-afb8-6d0a3fa3a80e","Type":"ContainerStarted","Data":"6ab899f072c3b8250e3c973a9ca749d497f2827d0393cdab0e5dc4e6b1472654"} Feb 27 10:51:24 crc kubenswrapper[4998]: I0227 10:51:24.908566 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" event={"ID":"96387bdc-46ea-4452-afb8-6d0a3fa3a80e","Type":"ContainerStarted","Data":"ced4e764d6ca4dd7e2fb88d523dfb27fe3b78a17da02b5419ec0f504d59dd7ee"} Feb 27 10:51:24 crc kubenswrapper[4998]: I0227 10:51:24.929124 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" podStartSLOduration=2.183103145 podStartE2EDuration="2.929105281s" podCreationTimestamp="2026-02-27 10:51:22 +0000 UTC" firstStartedPulling="2026-02-27 10:51:23.911917585 +0000 UTC m=+2035.910188573" lastFinishedPulling="2026-02-27 10:51:24.657919701 +0000 UTC m=+2036.656190709" observedRunningTime="2026-02-27 10:51:24.924859628 +0000 UTC m=+2036.923130656" watchObservedRunningTime="2026-02-27 10:51:24.929105281 +0000 UTC m=+2036.927376249" Feb 27 10:51:33 crc kubenswrapper[4998]: I0227 10:51:33.990425 4998 generic.go:334] "Generic (PLEG): container finished" podID="96387bdc-46ea-4452-afb8-6d0a3fa3a80e" containerID="6ab899f072c3b8250e3c973a9ca749d497f2827d0393cdab0e5dc4e6b1472654" exitCode=0 Feb 27 10:51:33 crc kubenswrapper[4998]: I0227 10:51:33.990530 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" event={"ID":"96387bdc-46ea-4452-afb8-6d0a3fa3a80e","Type":"ContainerDied","Data":"6ab899f072c3b8250e3c973a9ca749d497f2827d0393cdab0e5dc4e6b1472654"} Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.443353 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.642826 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-inventory\") pod \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.642989 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-ssh-key-openstack-edpm-ipam\") pod \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.643060 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2bw7\" (UniqueName: \"kubernetes.io/projected/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-kube-api-access-n2bw7\") pod \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\" (UID: \"96387bdc-46ea-4452-afb8-6d0a3fa3a80e\") " Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.649412 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-kube-api-access-n2bw7" (OuterVolumeSpecName: "kube-api-access-n2bw7") pod "96387bdc-46ea-4452-afb8-6d0a3fa3a80e" (UID: "96387bdc-46ea-4452-afb8-6d0a3fa3a80e"). InnerVolumeSpecName "kube-api-access-n2bw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.672596 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-inventory" (OuterVolumeSpecName: "inventory") pod "96387bdc-46ea-4452-afb8-6d0a3fa3a80e" (UID: "96387bdc-46ea-4452-afb8-6d0a3fa3a80e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.694697 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96387bdc-46ea-4452-afb8-6d0a3fa3a80e" (UID: "96387bdc-46ea-4452-afb8-6d0a3fa3a80e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.745945 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.745988 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2bw7\" (UniqueName: \"kubernetes.io/projected/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-kube-api-access-n2bw7\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:35 crc kubenswrapper[4998]: I0227 10:51:35.746001 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96387bdc-46ea-4452-afb8-6d0a3fa3a80e-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.014333 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" 
event={"ID":"96387bdc-46ea-4452-afb8-6d0a3fa3a80e","Type":"ContainerDied","Data":"ced4e764d6ca4dd7e2fb88d523dfb27fe3b78a17da02b5419ec0f504d59dd7ee"} Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.014662 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced4e764d6ca4dd7e2fb88d523dfb27fe3b78a17da02b5419ec0f504d59dd7ee" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.014435 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gml48" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.092270 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8"] Feb 27 10:51:36 crc kubenswrapper[4998]: E0227 10:51:36.092723 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96387bdc-46ea-4452-afb8-6d0a3fa3a80e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.092746 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="96387bdc-46ea-4452-afb8-6d0a3fa3a80e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.093048 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="96387bdc-46ea-4452-afb8-6d0a3fa3a80e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.093863 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.098044 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.098115 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.098045 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.098325 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.098217 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.098759 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.098981 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.104394 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8"] Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.106537 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.256732 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.256805 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.257049 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.257131 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.257206 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.258379 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnsvz\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-kube-api-access-vnsvz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.258418 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.258508 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.258538 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.258572 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.258597 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.258730 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.258766 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.258867 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.360252 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.360351 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.360485 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.360531 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.360648 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.360727 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.362363 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: 
\"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.362487 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnsvz\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-kube-api-access-vnsvz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.364786 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.365733 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.365817 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 
crc kubenswrapper[4998]: I0227 10:51:36.365853 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.365887 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.365915 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.365998 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc 
kubenswrapper[4998]: I0227 10:51:36.367720 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.368375 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.370914 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.372424 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.372426 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.372669 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.380008 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.386678 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.388797 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.390096 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.390649 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.391825 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.398147 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnsvz\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-kube-api-access-vnsvz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8\" (UID: 
\"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.410890 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:51:36 crc kubenswrapper[4998]: I0227 10:51:36.973317 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8"] Feb 27 10:51:37 crc kubenswrapper[4998]: I0227 10:51:37.025246 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" event={"ID":"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24","Type":"ContainerStarted","Data":"234a5969f6a710324f21bc4fcd83248bf45e6e0fb9898bb29cac39541ec96b7f"} Feb 27 10:51:39 crc kubenswrapper[4998]: I0227 10:51:39.052932 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" event={"ID":"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24","Type":"ContainerStarted","Data":"8a56dbbd7cb8a26014574f6c6632172ae7d280bd81fafe02e97ac5ec36b43810"} Feb 27 10:51:39 crc kubenswrapper[4998]: I0227 10:51:39.079595 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" podStartSLOduration=1.653177986 podStartE2EDuration="3.07957528s" podCreationTimestamp="2026-02-27 10:51:36 +0000 UTC" firstStartedPulling="2026-02-27 10:51:36.991337859 +0000 UTC m=+2048.989608837" lastFinishedPulling="2026-02-27 10:51:38.417735163 +0000 UTC m=+2050.416006131" observedRunningTime="2026-02-27 10:51:39.075551879 +0000 UTC m=+2051.073822867" watchObservedRunningTime="2026-02-27 10:51:39.07957528 +0000 UTC m=+2051.077846248" Feb 27 10:51:40 crc kubenswrapper[4998]: I0227 10:51:40.504821 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:51:40 crc kubenswrapper[4998]: I0227 10:51:40.505197 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:51:48 crc kubenswrapper[4998]: I0227 10:51:48.588001 4998 scope.go:117] "RemoveContainer" containerID="a090d5f3dba05c4961e0a631ba3f4fcb81c9f62f7244bd6634290d47e69a4c5d" Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.146662 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536492-j8djb"] Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.149360 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536492-j8djb" Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.151715 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.152066 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.152281 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.157189 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536492-j8djb"] Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.252877 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2kx\" (UniqueName: \"kubernetes.io/projected/61c9cd2a-63ea-4935-9102-c943c6479131-kube-api-access-2w2kx\") pod \"auto-csr-approver-29536492-j8djb\" (UID: \"61c9cd2a-63ea-4935-9102-c943c6479131\") " pod="openshift-infra/auto-csr-approver-29536492-j8djb" Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.356119 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w2kx\" (UniqueName: \"kubernetes.io/projected/61c9cd2a-63ea-4935-9102-c943c6479131-kube-api-access-2w2kx\") pod \"auto-csr-approver-29536492-j8djb\" (UID: \"61c9cd2a-63ea-4935-9102-c943c6479131\") " pod="openshift-infra/auto-csr-approver-29536492-j8djb" Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.381255 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w2kx\" (UniqueName: \"kubernetes.io/projected/61c9cd2a-63ea-4935-9102-c943c6479131-kube-api-access-2w2kx\") pod \"auto-csr-approver-29536492-j8djb\" (UID: \"61c9cd2a-63ea-4935-9102-c943c6479131\") " 
pod="openshift-infra/auto-csr-approver-29536492-j8djb" Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.473300 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536492-j8djb" Feb 27 10:52:00 crc kubenswrapper[4998]: I0227 10:52:00.931464 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536492-j8djb"] Feb 27 10:52:01 crc kubenswrapper[4998]: I0227 10:52:01.286890 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536492-j8djb" event={"ID":"61c9cd2a-63ea-4935-9102-c943c6479131","Type":"ContainerStarted","Data":"6384cf2851923b3edabd5a90a89f25807ec9fb4affa392c08b5df180eb7dbc3d"} Feb 27 10:52:02 crc kubenswrapper[4998]: I0227 10:52:02.298475 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536492-j8djb" event={"ID":"61c9cd2a-63ea-4935-9102-c943c6479131","Type":"ContainerStarted","Data":"93cb2f76c85667978039c472fb5fa9408058836f54e2c418be1791dad6bb2483"} Feb 27 10:52:02 crc kubenswrapper[4998]: I0227 10:52:02.320186 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536492-j8djb" podStartSLOduration=1.407763508 podStartE2EDuration="2.320165739s" podCreationTimestamp="2026-02-27 10:52:00 +0000 UTC" firstStartedPulling="2026-02-27 10:52:00.941672178 +0000 UTC m=+2072.939943136" lastFinishedPulling="2026-02-27 10:52:01.854074399 +0000 UTC m=+2073.852345367" observedRunningTime="2026-02-27 10:52:02.313471009 +0000 UTC m=+2074.311741977" watchObservedRunningTime="2026-02-27 10:52:02.320165739 +0000 UTC m=+2074.318436707" Feb 27 10:52:03 crc kubenswrapper[4998]: I0227 10:52:03.309108 4998 generic.go:334] "Generic (PLEG): container finished" podID="61c9cd2a-63ea-4935-9102-c943c6479131" containerID="93cb2f76c85667978039c472fb5fa9408058836f54e2c418be1791dad6bb2483" exitCode=0 Feb 27 10:52:03 crc 
kubenswrapper[4998]: I0227 10:52:03.309194 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536492-j8djb" event={"ID":"61c9cd2a-63ea-4935-9102-c943c6479131","Type":"ContainerDied","Data":"93cb2f76c85667978039c472fb5fa9408058836f54e2c418be1791dad6bb2483"} Feb 27 10:52:03 crc kubenswrapper[4998]: I0227 10:52:03.958020 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-25qb8"] Feb 27 10:52:03 crc kubenswrapper[4998]: I0227 10:52:03.962108 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:03 crc kubenswrapper[4998]: I0227 10:52:03.971655 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25qb8"] Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.139293 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-utilities\") pod \"redhat-operators-25qb8\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.139451 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-catalog-content\") pod \"redhat-operators-25qb8\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.139478 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwdt\" (UniqueName: \"kubernetes.io/projected/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-kube-api-access-8pwdt\") pod \"redhat-operators-25qb8\" (UID: 
\"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.240765 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-catalog-content\") pod \"redhat-operators-25qb8\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.241056 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pwdt\" (UniqueName: \"kubernetes.io/projected/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-kube-api-access-8pwdt\") pod \"redhat-operators-25qb8\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.241202 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-utilities\") pod \"redhat-operators-25qb8\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.241437 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-catalog-content\") pod \"redhat-operators-25qb8\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.241820 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-utilities\") pod \"redhat-operators-25qb8\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " 
pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.266317 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwdt\" (UniqueName: \"kubernetes.io/projected/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-kube-api-access-8pwdt\") pod \"redhat-operators-25qb8\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.280619 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.710242 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536492-j8djb" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.854132 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w2kx\" (UniqueName: \"kubernetes.io/projected/61c9cd2a-63ea-4935-9102-c943c6479131-kube-api-access-2w2kx\") pod \"61c9cd2a-63ea-4935-9102-c943c6479131\" (UID: \"61c9cd2a-63ea-4935-9102-c943c6479131\") " Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.860140 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c9cd2a-63ea-4935-9102-c943c6479131-kube-api-access-2w2kx" (OuterVolumeSpecName: "kube-api-access-2w2kx") pod "61c9cd2a-63ea-4935-9102-c943c6479131" (UID: "61c9cd2a-63ea-4935-9102-c943c6479131"). InnerVolumeSpecName "kube-api-access-2w2kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.879168 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25qb8"] Feb 27 10:52:04 crc kubenswrapper[4998]: W0227 10:52:04.882793 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3df5f8c9_aa4b_4a79_b07d_265c7fe120ad.slice/crio-5bde29f29622d919fb0f22cf220b3e0817c86ecdbc6db7ca5304fb2ae113ffc9 WatchSource:0}: Error finding container 5bde29f29622d919fb0f22cf220b3e0817c86ecdbc6db7ca5304fb2ae113ffc9: Status 404 returned error can't find the container with id 5bde29f29622d919fb0f22cf220b3e0817c86ecdbc6db7ca5304fb2ae113ffc9 Feb 27 10:52:04 crc kubenswrapper[4998]: I0227 10:52:04.957669 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w2kx\" (UniqueName: \"kubernetes.io/projected/61c9cd2a-63ea-4935-9102-c943c6479131-kube-api-access-2w2kx\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:05 crc kubenswrapper[4998]: I0227 10:52:05.341923 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536492-j8djb" event={"ID":"61c9cd2a-63ea-4935-9102-c943c6479131","Type":"ContainerDied","Data":"6384cf2851923b3edabd5a90a89f25807ec9fb4affa392c08b5df180eb7dbc3d"} Feb 27 10:52:05 crc kubenswrapper[4998]: I0227 10:52:05.342273 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6384cf2851923b3edabd5a90a89f25807ec9fb4affa392c08b5df180eb7dbc3d" Feb 27 10:52:05 crc kubenswrapper[4998]: I0227 10:52:05.341946 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536492-j8djb" Feb 27 10:52:05 crc kubenswrapper[4998]: I0227 10:52:05.343993 4998 generic.go:334] "Generic (PLEG): container finished" podID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerID="691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9" exitCode=0 Feb 27 10:52:05 crc kubenswrapper[4998]: I0227 10:52:05.344033 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25qb8" event={"ID":"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad","Type":"ContainerDied","Data":"691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9"} Feb 27 10:52:05 crc kubenswrapper[4998]: I0227 10:52:05.344059 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25qb8" event={"ID":"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad","Type":"ContainerStarted","Data":"5bde29f29622d919fb0f22cf220b3e0817c86ecdbc6db7ca5304fb2ae113ffc9"} Feb 27 10:52:05 crc kubenswrapper[4998]: I0227 10:52:05.398623 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536486-vmqwl"] Feb 27 10:52:05 crc kubenswrapper[4998]: I0227 10:52:05.407264 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536486-vmqwl"] Feb 27 10:52:06 crc kubenswrapper[4998]: I0227 10:52:06.353246 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25qb8" event={"ID":"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad","Type":"ContainerStarted","Data":"bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e"} Feb 27 10:52:06 crc kubenswrapper[4998]: I0227 10:52:06.774277 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce9adf8-87bf-48d3-a7bb-3e0596585bf4" path="/var/lib/kubelet/pods/dce9adf8-87bf-48d3-a7bb-3e0596585bf4/volumes" Feb 27 10:52:08 crc kubenswrapper[4998]: I0227 10:52:08.370790 4998 generic.go:334] "Generic (PLEG): 
container finished" podID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerID="bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e" exitCode=0 Feb 27 10:52:08 crc kubenswrapper[4998]: I0227 10:52:08.371098 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25qb8" event={"ID":"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad","Type":"ContainerDied","Data":"bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e"} Feb 27 10:52:09 crc kubenswrapper[4998]: I0227 10:52:09.383482 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25qb8" event={"ID":"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad","Type":"ContainerStarted","Data":"39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2"} Feb 27 10:52:09 crc kubenswrapper[4998]: I0227 10:52:09.409769 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-25qb8" podStartSLOduration=2.976577122 podStartE2EDuration="6.4097236s" podCreationTimestamp="2026-02-27 10:52:03 +0000 UTC" firstStartedPulling="2026-02-27 10:52:05.34562118 +0000 UTC m=+2077.343892148" lastFinishedPulling="2026-02-27 10:52:08.778767648 +0000 UTC m=+2080.777038626" observedRunningTime="2026-02-27 10:52:09.408959115 +0000 UTC m=+2081.407230083" watchObservedRunningTime="2026-02-27 10:52:09.4097236 +0000 UTC m=+2081.407994728" Feb 27 10:52:10 crc kubenswrapper[4998]: I0227 10:52:10.504488 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:52:10 crc kubenswrapper[4998]: I0227 10:52:10.504819 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" 
podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:52:10 crc kubenswrapper[4998]: I0227 10:52:10.504869 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:52:10 crc kubenswrapper[4998]: I0227 10:52:10.505540 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"04de93613a67bf6912778a8335db7a4a5c63027b3022fa81b03ae228d08d8d7b"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:52:10 crc kubenswrapper[4998]: I0227 10:52:10.505591 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://04de93613a67bf6912778a8335db7a4a5c63027b3022fa81b03ae228d08d8d7b" gracePeriod=600 Feb 27 10:52:11 crc kubenswrapper[4998]: I0227 10:52:11.401554 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="04de93613a67bf6912778a8335db7a4a5c63027b3022fa81b03ae228d08d8d7b" exitCode=0 Feb 27 10:52:11 crc kubenswrapper[4998]: I0227 10:52:11.401641 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"04de93613a67bf6912778a8335db7a4a5c63027b3022fa81b03ae228d08d8d7b"} Feb 27 10:52:11 crc kubenswrapper[4998]: I0227 10:52:11.402494 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"} Feb 27 10:52:11 crc kubenswrapper[4998]: I0227 10:52:11.402600 4998 scope.go:117] "RemoveContainer" containerID="f7bf3a0484c3e7ee22533ca49a17be909a31292e5418f4d1a0cd402775584d49" Feb 27 10:52:13 crc kubenswrapper[4998]: I0227 10:52:13.424615 4998 generic.go:334] "Generic (PLEG): container finished" podID="9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" containerID="8a56dbbd7cb8a26014574f6c6632172ae7d280bd81fafe02e97ac5ec36b43810" exitCode=0 Feb 27 10:52:13 crc kubenswrapper[4998]: I0227 10:52:13.424705 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" event={"ID":"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24","Type":"ContainerDied","Data":"8a56dbbd7cb8a26014574f6c6632172ae7d280bd81fafe02e97ac5ec36b43810"} Feb 27 10:52:14 crc kubenswrapper[4998]: I0227 10:52:14.281763 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:14 crc kubenswrapper[4998]: I0227 10:52:14.282498 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:14 crc kubenswrapper[4998]: I0227 10:52:14.885148 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.052701 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.052746 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ssh-key-openstack-edpm-ipam\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.052771 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ovn-combined-ca-bundle\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.052789 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnsvz\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-kube-api-access-vnsvz\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.052806 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-neutron-metadata-combined-ca-bundle\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: 
\"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.053916 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-nova-combined-ca-bundle\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.054172 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.054535 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-libvirt-combined-ca-bundle\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.054635 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-bootstrap-combined-ca-bundle\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.054730 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 
27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.054768 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-repo-setup-combined-ca-bundle\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.054792 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-telemetry-combined-ca-bundle\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.054820 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-inventory\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.054862 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-ovn-default-certs-0\") pod \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\" (UID: \"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24\") " Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.060118 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.060555 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.060894 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-kube-api-access-vnsvz" (OuterVolumeSpecName: "kube-api-access-vnsvz") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "kube-api-access-vnsvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.061243 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.061371 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.061518 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.061799 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.062152 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.062796 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.063734 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.072841 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.081325 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.087940 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-inventory" (OuterVolumeSpecName: "inventory") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.088799 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" (UID: "9ff2a9f1-4fc7-4a25-a209-3e58f6610e24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157006 4998 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157072 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157090 4998 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157101 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnsvz\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-kube-api-access-vnsvz\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157129 4998 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-neutron-metadata-combined-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157140 4998 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157152 4998 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157163 4998 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157174 4998 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157185 4998 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157194 4998 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157204 4998 reconciler_common.go:293] "Volume detached for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157213 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.157238 4998 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9ff2a9f1-4fc7-4a25-a209-3e58f6610e24-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.331712 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-25qb8" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerName="registry-server" probeResult="failure" output=< Feb 27 10:52:15 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s Feb 27 10:52:15 crc kubenswrapper[4998]: > Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.442446 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" event={"ID":"9ff2a9f1-4fc7-4a25-a209-3e58f6610e24","Type":"ContainerDied","Data":"234a5969f6a710324f21bc4fcd83248bf45e6e0fb9898bb29cac39541ec96b7f"} Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.442489 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="234a5969f6a710324f21bc4fcd83248bf45e6e0fb9898bb29cac39541ec96b7f" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.442545 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.554648 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr"] Feb 27 10:52:15 crc kubenswrapper[4998]: E0227 10:52:15.555383 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.555491 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 27 10:52:15 crc kubenswrapper[4998]: E0227 10:52:15.555591 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c9cd2a-63ea-4935-9102-c943c6479131" containerName="oc" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.555664 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c9cd2a-63ea-4935-9102-c943c6479131" containerName="oc" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.555982 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c9cd2a-63ea-4935-9102-c943c6479131" containerName="oc" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.556085 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff2a9f1-4fc7-4a25-a209-3e58f6610e24" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.557656 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.559981 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.560442 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.560460 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.560812 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.561021 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.574142 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr"] Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.668061 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5384243b-a246-4b6d-8cba-101d025f3498-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.668477 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.668594 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.668764 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468cf\" (UniqueName: \"kubernetes.io/projected/5384243b-a246-4b6d-8cba-101d025f3498-kube-api-access-468cf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.668987 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.770816 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.771128 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5384243b-a246-4b6d-8cba-101d025f3498-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.771265 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.771378 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.771503 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468cf\" (UniqueName: \"kubernetes.io/projected/5384243b-a246-4b6d-8cba-101d025f3498-kube-api-access-468cf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.772431 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5384243b-a246-4b6d-8cba-101d025f3498-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.776150 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.777932 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.781961 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.791703 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468cf\" (UniqueName: \"kubernetes.io/projected/5384243b-a246-4b6d-8cba-101d025f3498-kube-api-access-468cf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8zmr\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:15 crc kubenswrapper[4998]: I0227 10:52:15.892624 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:52:16 crc kubenswrapper[4998]: I0227 10:52:16.475631 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr"] Feb 27 10:52:17 crc kubenswrapper[4998]: I0227 10:52:17.462483 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" event={"ID":"5384243b-a246-4b6d-8cba-101d025f3498","Type":"ContainerStarted","Data":"b3e80eaa84477520035f3e2538f377e5232831b40f5246e5079ec31db9287957"} Feb 27 10:52:17 crc kubenswrapper[4998]: I0227 10:52:17.462717 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" event={"ID":"5384243b-a246-4b6d-8cba-101d025f3498","Type":"ContainerStarted","Data":"39b3affb0cac337845193d4e367eab1681f1cf533edfe598a1192c82a542d814"} Feb 27 10:52:17 crc kubenswrapper[4998]: I0227 10:52:17.493874 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" podStartSLOduration=2.103023757 podStartE2EDuration="2.493856206s" podCreationTimestamp="2026-02-27 10:52:15 +0000 UTC" firstStartedPulling="2026-02-27 10:52:16.475286278 +0000 UTC m=+2088.473557256" lastFinishedPulling="2026-02-27 10:52:16.866118737 +0000 UTC m=+2088.864389705" observedRunningTime="2026-02-27 10:52:17.484071961 +0000 UTC m=+2089.482342929" watchObservedRunningTime="2026-02-27 10:52:17.493856206 +0000 UTC m=+2089.492127174" Feb 27 10:52:24 crc kubenswrapper[4998]: I0227 10:52:24.341175 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:24 crc kubenswrapper[4998]: I0227 10:52:24.391818 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:25 crc kubenswrapper[4998]: I0227 
10:52:25.784459 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25qb8"] Feb 27 10:52:25 crc kubenswrapper[4998]: I0227 10:52:25.785168 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-25qb8" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerName="registry-server" containerID="cri-o://39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2" gracePeriod=2 Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.259115 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.410744 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-utilities\") pod \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.410856 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwdt\" (UniqueName: \"kubernetes.io/projected/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-kube-api-access-8pwdt\") pod \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.410885 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-catalog-content\") pod \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\" (UID: \"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad\") " Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.412945 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-utilities" (OuterVolumeSpecName: 
"utilities") pod "3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" (UID: "3df5f8c9-aa4b-4a79-b07d-265c7fe120ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.417474 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-kube-api-access-8pwdt" (OuterVolumeSpecName: "kube-api-access-8pwdt") pod "3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" (UID: "3df5f8c9-aa4b-4a79-b07d-265c7fe120ad"). InnerVolumeSpecName "kube-api-access-8pwdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.513574 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.513610 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwdt\" (UniqueName: \"kubernetes.io/projected/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-kube-api-access-8pwdt\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.532291 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" (UID: "3df5f8c9-aa4b-4a79-b07d-265c7fe120ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.543352 4998 generic.go:334] "Generic (PLEG): container finished" podID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerID="39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2" exitCode=0 Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.543416 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25qb8" event={"ID":"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad","Type":"ContainerDied","Data":"39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2"} Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.543448 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25qb8" event={"ID":"3df5f8c9-aa4b-4a79-b07d-265c7fe120ad","Type":"ContainerDied","Data":"5bde29f29622d919fb0f22cf220b3e0817c86ecdbc6db7ca5304fb2ae113ffc9"} Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.543457 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-25qb8" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.543468 4998 scope.go:117] "RemoveContainer" containerID="39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.565156 4998 scope.go:117] "RemoveContainer" containerID="bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.578388 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25qb8"] Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.585682 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-25qb8"] Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.603647 4998 scope.go:117] "RemoveContainer" containerID="691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.616481 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.639563 4998 scope.go:117] "RemoveContainer" containerID="39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2" Feb 27 10:52:26 crc kubenswrapper[4998]: E0227 10:52:26.640038 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2\": container with ID starting with 39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2 not found: ID does not exist" containerID="39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.640096 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2"} err="failed to get container status \"39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2\": rpc error: code = NotFound desc = could not find container \"39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2\": container with ID starting with 39fccd280175fd4d650b4e8d2d31dcfa65d785f2b5fb930916739e86e13ad9e2 not found: ID does not exist" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.640127 4998 scope.go:117] "RemoveContainer" containerID="bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e" Feb 27 10:52:26 crc kubenswrapper[4998]: E0227 10:52:26.640599 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e\": container with ID starting with bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e not found: ID does not exist" containerID="bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.640627 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e"} err="failed to get container status \"bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e\": rpc error: code = NotFound desc = could not find container \"bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e\": container with ID starting with bdbc6b5859a249c5ac35790c86a20e683253bacce9a3ec71e5e7955fb4ce6f9e not found: ID does not exist" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.640645 4998 scope.go:117] "RemoveContainer" containerID="691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9" Feb 27 10:52:26 crc kubenswrapper[4998]: E0227 10:52:26.640952 4998 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9\": container with ID starting with 691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9 not found: ID does not exist" containerID="691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.640996 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9"} err="failed to get container status \"691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9\": rpc error: code = NotFound desc = could not find container \"691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9\": container with ID starting with 691329e447abafa2d123f155c74336a501701987c87a8111f4636ba4b55e64b9 not found: ID does not exist" Feb 27 10:52:26 crc kubenswrapper[4998]: I0227 10:52:26.780605 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" path="/var/lib/kubelet/pods/3df5f8c9-aa4b-4a79-b07d-265c7fe120ad/volumes" Feb 27 10:52:48 crc kubenswrapper[4998]: I0227 10:52:48.692031 4998 scope.go:117] "RemoveContainer" containerID="e5e86a51155f19c0e93dfb872c142f8706afdd37d3b4c58cd7ee97a7a1f97537" Feb 27 10:53:16 crc kubenswrapper[4998]: I0227 10:53:16.027118 4998 generic.go:334] "Generic (PLEG): container finished" podID="5384243b-a246-4b6d-8cba-101d025f3498" containerID="b3e80eaa84477520035f3e2538f377e5232831b40f5246e5079ec31db9287957" exitCode=0 Feb 27 10:53:16 crc kubenswrapper[4998]: I0227 10:53:16.027269 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" event={"ID":"5384243b-a246-4b6d-8cba-101d025f3498","Type":"ContainerDied","Data":"b3e80eaa84477520035f3e2538f377e5232831b40f5246e5079ec31db9287957"} Feb 27 10:53:17 crc kubenswrapper[4998]: 
I0227 10:53:17.516345 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.585241 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-inventory\") pod \"5384243b-a246-4b6d-8cba-101d025f3498\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.585354 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ssh-key-openstack-edpm-ipam\") pod \"5384243b-a246-4b6d-8cba-101d025f3498\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.585384 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-468cf\" (UniqueName: \"kubernetes.io/projected/5384243b-a246-4b6d-8cba-101d025f3498-kube-api-access-468cf\") pod \"5384243b-a246-4b6d-8cba-101d025f3498\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.585557 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5384243b-a246-4b6d-8cba-101d025f3498-ovncontroller-config-0\") pod \"5384243b-a246-4b6d-8cba-101d025f3498\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.585687 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ovn-combined-ca-bundle\") pod \"5384243b-a246-4b6d-8cba-101d025f3498\" (UID: \"5384243b-a246-4b6d-8cba-101d025f3498\") " Feb 27 
10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.603040 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5384243b-a246-4b6d-8cba-101d025f3498-kube-api-access-468cf" (OuterVolumeSpecName: "kube-api-access-468cf") pod "5384243b-a246-4b6d-8cba-101d025f3498" (UID: "5384243b-a246-4b6d-8cba-101d025f3498"). InnerVolumeSpecName "kube-api-access-468cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.603471 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5384243b-a246-4b6d-8cba-101d025f3498" (UID: "5384243b-a246-4b6d-8cba-101d025f3498"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.615538 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5384243b-a246-4b6d-8cba-101d025f3498-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "5384243b-a246-4b6d-8cba-101d025f3498" (UID: "5384243b-a246-4b6d-8cba-101d025f3498"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.617068 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-inventory" (OuterVolumeSpecName: "inventory") pod "5384243b-a246-4b6d-8cba-101d025f3498" (UID: "5384243b-a246-4b6d-8cba-101d025f3498"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.618062 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5384243b-a246-4b6d-8cba-101d025f3498" (UID: "5384243b-a246-4b6d-8cba-101d025f3498"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.688354 4998 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5384243b-a246-4b6d-8cba-101d025f3498-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.688415 4998 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.688437 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.688457 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5384243b-a246-4b6d-8cba-101d025f3498-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:17 crc kubenswrapper[4998]: I0227 10:53:17.688475 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-468cf\" (UniqueName: \"kubernetes.io/projected/5384243b-a246-4b6d-8cba-101d025f3498-kube-api-access-468cf\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.043063 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" event={"ID":"5384243b-a246-4b6d-8cba-101d025f3498","Type":"ContainerDied","Data":"39b3affb0cac337845193d4e367eab1681f1cf533edfe598a1192c82a542d814"} Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.043407 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39b3affb0cac337845193d4e367eab1681f1cf533edfe598a1192c82a542d814" Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.043107 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8zmr" Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.154555 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"] Feb 27 10:53:18 crc kubenswrapper[4998]: E0227 10:53:18.155036 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerName="extract-utilities" Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.155055 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerName="extract-utilities" Feb 27 10:53:18 crc kubenswrapper[4998]: E0227 10:53:18.155081 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5384243b-a246-4b6d-8cba-101d025f3498" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.155087 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5384243b-a246-4b6d-8cba-101d025f3498" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 27 10:53:18 crc kubenswrapper[4998]: E0227 10:53:18.155100 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerName="registry-server" Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.155107 4998 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerName="registry-server"
Feb 27 10:53:18 crc kubenswrapper[4998]: E0227 10:53:18.155126 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerName="extract-content"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.155131 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerName="extract-content"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.155365 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="5384243b-a246-4b6d-8cba-101d025f3498" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.155385 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df5f8c9-aa4b-4a79-b07d-265c7fe120ad" containerName="registry-server"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.156112 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.158316 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.158382 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.159143 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.159585 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.159592 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.160939 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.170476 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"]
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.300845 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.300907 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.300967 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.301087 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.301152 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.301208 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xtg\" (UniqueName: \"kubernetes.io/projected/fbf8ea11-95d4-4444-8f99-591502846aec-kube-api-access-64xtg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.404618 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.404734 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xtg\" (UniqueName: \"kubernetes.io/projected/fbf8ea11-95d4-4444-8f99-591502846aec-kube-api-access-64xtg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.404822 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.404918 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.405005 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.405371 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.411460 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.411753 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.412567 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.414121 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.425361 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xtg\" (UniqueName: \"kubernetes.io/projected/fbf8ea11-95d4-4444-8f99-591502846aec-kube-api-access-64xtg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.436675 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.472883 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:53:18 crc kubenswrapper[4998]: I0227 10:53:18.972797 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"]
Feb 27 10:53:19 crc kubenswrapper[4998]: I0227 10:53:19.055545 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6" event={"ID":"fbf8ea11-95d4-4444-8f99-591502846aec","Type":"ContainerStarted","Data":"0277f86f177b326c3a92fb050d0a195980b2b7439dec87c34d15550ed0f2dffb"}
Feb 27 10:53:20 crc kubenswrapper[4998]: I0227 10:53:20.079261 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6" event={"ID":"fbf8ea11-95d4-4444-8f99-591502846aec","Type":"ContainerStarted","Data":"45fa9d1384ca4ea25c006a056e2c8a878420187b21ae759d1dca888fd86b50a9"}
Feb 27 10:53:20 crc kubenswrapper[4998]: I0227 10:53:20.114786 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6" podStartSLOduration=1.689433063 podStartE2EDuration="2.114762771s" podCreationTimestamp="2026-02-27 10:53:18 +0000 UTC" firstStartedPulling="2026-02-27 10:53:18.970170902 +0000 UTC m=+2150.968441870" lastFinishedPulling="2026-02-27 10:53:19.3955006 +0000 UTC m=+2151.393771578" observedRunningTime="2026-02-27 10:53:20.102212032 +0000 UTC m=+2152.100483010" watchObservedRunningTime="2026-02-27 10:53:20.114762771 +0000 UTC m=+2152.113033749"
Feb 27 10:53:38 crc kubenswrapper[4998]: I0227 10:53:38.295842 4998 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7758b6f85-bxf6h" podUID="304f7a70-581a-407b-9280-fe7642feb71f" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.147081 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536494-bp2cm"]
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.148860 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536494-bp2cm"
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.152811 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.152811 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.153075 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch"
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.158943 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536494-bp2cm"]
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.219681 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k2qw\" (UniqueName: \"kubernetes.io/projected/cba97d31-99ca-4c41-86ab-8a712a2c5ddb-kube-api-access-2k2qw\") pod \"auto-csr-approver-29536494-bp2cm\" (UID: \"cba97d31-99ca-4c41-86ab-8a712a2c5ddb\") " pod="openshift-infra/auto-csr-approver-29536494-bp2cm"
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.322008 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k2qw\" (UniqueName: \"kubernetes.io/projected/cba97d31-99ca-4c41-86ab-8a712a2c5ddb-kube-api-access-2k2qw\") pod \"auto-csr-approver-29536494-bp2cm\" (UID: \"cba97d31-99ca-4c41-86ab-8a712a2c5ddb\") " pod="openshift-infra/auto-csr-approver-29536494-bp2cm"
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.341083 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k2qw\" (UniqueName: \"kubernetes.io/projected/cba97d31-99ca-4c41-86ab-8a712a2c5ddb-kube-api-access-2k2qw\") pod \"auto-csr-approver-29536494-bp2cm\" (UID: \"cba97d31-99ca-4c41-86ab-8a712a2c5ddb\") " pod="openshift-infra/auto-csr-approver-29536494-bp2cm"
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.473308 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536494-bp2cm"
Feb 27 10:54:00 crc kubenswrapper[4998]: I0227 10:54:00.960898 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536494-bp2cm"]
Feb 27 10:54:01 crc kubenswrapper[4998]: I0227 10:54:01.636746 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536494-bp2cm" event={"ID":"cba97d31-99ca-4c41-86ab-8a712a2c5ddb","Type":"ContainerStarted","Data":"4e9004edc4e5861a11489a5c9a3289bbba4d7916217a4503ea142aaa8a6c920f"}
Feb 27 10:54:02 crc kubenswrapper[4998]: I0227 10:54:02.650077 4998 generic.go:334] "Generic (PLEG): container finished" podID="cba97d31-99ca-4c41-86ab-8a712a2c5ddb" containerID="4a8e94e3f0a0033d67c7f3fa9c1afaf67b408f6883a860a4e441dd61a8ddbe4d" exitCode=0
Feb 27 10:54:02 crc kubenswrapper[4998]: I0227 10:54:02.650195 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536494-bp2cm" event={"ID":"cba97d31-99ca-4c41-86ab-8a712a2c5ddb","Type":"ContainerDied","Data":"4a8e94e3f0a0033d67c7f3fa9c1afaf67b408f6883a860a4e441dd61a8ddbe4d"}
Feb 27 10:54:04 crc kubenswrapper[4998]: I0227 10:54:04.059436 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536494-bp2cm"
Feb 27 10:54:04 crc kubenswrapper[4998]: I0227 10:54:04.103718 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k2qw\" (UniqueName: \"kubernetes.io/projected/cba97d31-99ca-4c41-86ab-8a712a2c5ddb-kube-api-access-2k2qw\") pod \"cba97d31-99ca-4c41-86ab-8a712a2c5ddb\" (UID: \"cba97d31-99ca-4c41-86ab-8a712a2c5ddb\") "
Feb 27 10:54:04 crc kubenswrapper[4998]: I0227 10:54:04.110968 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba97d31-99ca-4c41-86ab-8a712a2c5ddb-kube-api-access-2k2qw" (OuterVolumeSpecName: "kube-api-access-2k2qw") pod "cba97d31-99ca-4c41-86ab-8a712a2c5ddb" (UID: "cba97d31-99ca-4c41-86ab-8a712a2c5ddb"). InnerVolumeSpecName "kube-api-access-2k2qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:54:04 crc kubenswrapper[4998]: I0227 10:54:04.206749 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k2qw\" (UniqueName: \"kubernetes.io/projected/cba97d31-99ca-4c41-86ab-8a712a2c5ddb-kube-api-access-2k2qw\") on node \"crc\" DevicePath \"\""
Feb 27 10:54:04 crc kubenswrapper[4998]: I0227 10:54:04.678137 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536494-bp2cm" event={"ID":"cba97d31-99ca-4c41-86ab-8a712a2c5ddb","Type":"ContainerDied","Data":"4e9004edc4e5861a11489a5c9a3289bbba4d7916217a4503ea142aaa8a6c920f"}
Feb 27 10:54:04 crc kubenswrapper[4998]: I0227 10:54:04.678174 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9004edc4e5861a11489a5c9a3289bbba4d7916217a4503ea142aaa8a6c920f"
Feb 27 10:54:04 crc kubenswrapper[4998]: I0227 10:54:04.678238 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536494-bp2cm"
Feb 27 10:54:05 crc kubenswrapper[4998]: I0227 10:54:05.126614 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536488-7f9vc"]
Feb 27 10:54:05 crc kubenswrapper[4998]: I0227 10:54:05.134652 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536488-7f9vc"]
Feb 27 10:54:06 crc kubenswrapper[4998]: I0227 10:54:06.701967 4998 generic.go:334] "Generic (PLEG): container finished" podID="fbf8ea11-95d4-4444-8f99-591502846aec" containerID="45fa9d1384ca4ea25c006a056e2c8a878420187b21ae759d1dca888fd86b50a9" exitCode=0
Feb 27 10:54:06 crc kubenswrapper[4998]: I0227 10:54:06.702024 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6" event={"ID":"fbf8ea11-95d4-4444-8f99-591502846aec","Type":"ContainerDied","Data":"45fa9d1384ca4ea25c006a056e2c8a878420187b21ae759d1dca888fd86b50a9"}
Feb 27 10:54:06 crc kubenswrapper[4998]: I0227 10:54:06.778368 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="779a501d-c19a-4ee2-94ed-9449448195b2" path="/var/lib/kubelet/pods/779a501d-c19a-4ee2-94ed-9449448195b2/volumes"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.093953 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.185460 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-metadata-combined-ca-bundle\") pod \"fbf8ea11-95d4-4444-8f99-591502846aec\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") "
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.185648 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-inventory\") pod \"fbf8ea11-95d4-4444-8f99-591502846aec\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") "
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.185672 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-nova-metadata-neutron-config-0\") pod \"fbf8ea11-95d4-4444-8f99-591502846aec\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") "
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.185690 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64xtg\" (UniqueName: \"kubernetes.io/projected/fbf8ea11-95d4-4444-8f99-591502846aec-kube-api-access-64xtg\") pod \"fbf8ea11-95d4-4444-8f99-591502846aec\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") "
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.185753 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-ssh-key-openstack-edpm-ipam\") pod \"fbf8ea11-95d4-4444-8f99-591502846aec\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") "
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.185773 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-ovn-metadata-agent-neutron-config-0\") pod \"fbf8ea11-95d4-4444-8f99-591502846aec\" (UID: \"fbf8ea11-95d4-4444-8f99-591502846aec\") "
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.191388 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf8ea11-95d4-4444-8f99-591502846aec-kube-api-access-64xtg" (OuterVolumeSpecName: "kube-api-access-64xtg") pod "fbf8ea11-95d4-4444-8f99-591502846aec" (UID: "fbf8ea11-95d4-4444-8f99-591502846aec"). InnerVolumeSpecName "kube-api-access-64xtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.191574 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fbf8ea11-95d4-4444-8f99-591502846aec" (UID: "fbf8ea11-95d4-4444-8f99-591502846aec"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.213923 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "fbf8ea11-95d4-4444-8f99-591502846aec" (UID: "fbf8ea11-95d4-4444-8f99-591502846aec"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.216071 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "fbf8ea11-95d4-4444-8f99-591502846aec" (UID: "fbf8ea11-95d4-4444-8f99-591502846aec"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.219081 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-inventory" (OuterVolumeSpecName: "inventory") pod "fbf8ea11-95d4-4444-8f99-591502846aec" (UID: "fbf8ea11-95d4-4444-8f99-591502846aec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.219113 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbf8ea11-95d4-4444-8f99-591502846aec" (UID: "fbf8ea11-95d4-4444-8f99-591502846aec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.289106 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-inventory\") on node \"crc\" DevicePath \"\""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.289154 4998 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.289170 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64xtg\" (UniqueName: \"kubernetes.io/projected/fbf8ea11-95d4-4444-8f99-591502846aec-kube-api-access-64xtg\") on node \"crc\" DevicePath \"\""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.289181 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.289193 4998 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.289207 4998 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf8ea11-95d4-4444-8f99-591502846aec-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.720010 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6" event={"ID":"fbf8ea11-95d4-4444-8f99-591502846aec","Type":"ContainerDied","Data":"0277f86f177b326c3a92fb050d0a195980b2b7439dec87c34d15550ed0f2dffb"}
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.720054 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0277f86f177b326c3a92fb050d0a195980b2b7439dec87c34d15550ed0f2dffb"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.720109 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.814701 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"]
Feb 27 10:54:08 crc kubenswrapper[4998]: E0227 10:54:08.815299 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba97d31-99ca-4c41-86ab-8a712a2c5ddb" containerName="oc"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.815327 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba97d31-99ca-4c41-86ab-8a712a2c5ddb" containerName="oc"
Feb 27 10:54:08 crc kubenswrapper[4998]: E0227 10:54:08.815350 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf8ea11-95d4-4444-8f99-591502846aec" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.815360 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf8ea11-95d4-4444-8f99-591502846aec" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.815934 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf8ea11-95d4-4444-8f99-591502846aec" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.815970 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba97d31-99ca-4c41-86ab-8a712a2c5ddb" containerName="oc"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.816811 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.820176 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.820390 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.820681 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.822537 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.822767 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.825896 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"]
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.900937 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.901009 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.901078 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56ps\" (UniqueName: \"kubernetes.io/projected/201a3a34-a0ff-476a-9fb2-db9ad3757a58-kube-api-access-d56ps\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.901158 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:08 crc kubenswrapper[4998]: I0227 10:54:08.901179 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.002284 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.002788 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.002940 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.003049 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.003256 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56ps\" (UniqueName: \"kubernetes.io/projected/201a3a34-a0ff-476a-9fb2-db9ad3757a58-kube-api-access-d56ps\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.007897 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.007922 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.008537 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.016127 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.023017 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56ps\" (UniqueName: \"kubernetes.io/projected/201a3a34-a0ff-476a-9fb2-db9ad3757a58-kube-api-access-d56ps\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.145491 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.636608 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9"]
Feb 27 10:54:09 crc kubenswrapper[4998]: I0227 10:54:09.729986 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9" event={"ID":"201a3a34-a0ff-476a-9fb2-db9ad3757a58","Type":"ContainerStarted","Data":"394507b3081fbe26931384c42b6bc6bce80730089e7cc13c91394cf183a56e61"}
Feb 27 10:54:10 crc kubenswrapper[4998]: I0227 10:54:10.504835 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 10:54:10 crc kubenswrapper[4998]: I0227 10:54:10.505111 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 10:54:10 crc kubenswrapper[4998]: I0227 10:54:10.741343 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9" event={"ID":"201a3a34-a0ff-476a-9fb2-db9ad3757a58","Type":"ContainerStarted","Data":"6b6f30299b9eb80e8c5047de5d1d01175ba6dd13c564701a4cb0dad651cba28e"}
Feb 27 10:54:10 crc kubenswrapper[4998]: I0227 10:54:10.768493 4998
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9" podStartSLOduration=2.28570538 podStartE2EDuration="2.768471488s" podCreationTimestamp="2026-02-27 10:54:08 +0000 UTC" firstStartedPulling="2026-02-27 10:54:09.640322144 +0000 UTC m=+2201.638593112" lastFinishedPulling="2026-02-27 10:54:10.123088232 +0000 UTC m=+2202.121359220" observedRunningTime="2026-02-27 10:54:10.76334351 +0000 UTC m=+2202.761614488" watchObservedRunningTime="2026-02-27 10:54:10.768471488 +0000 UTC m=+2202.766742456" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.411572 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8gk4t"] Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.414755 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.434936 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gk4t"] Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.486844 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-utilities\") pod \"redhat-marketplace-8gk4t\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.486894 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-catalog-content\") pod \"redhat-marketplace-8gk4t\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.486970 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cwbn\" (UniqueName: \"kubernetes.io/projected/298c4351-591e-4e23-9c4e-b02c9c938b00-kube-api-access-7cwbn\") pod \"redhat-marketplace-8gk4t\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.588997 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cwbn\" (UniqueName: \"kubernetes.io/projected/298c4351-591e-4e23-9c4e-b02c9c938b00-kube-api-access-7cwbn\") pod \"redhat-marketplace-8gk4t\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.589686 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-utilities\") pod \"redhat-marketplace-8gk4t\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.590284 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-utilities\") pod \"redhat-marketplace-8gk4t\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.590578 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-catalog-content\") pod \"redhat-marketplace-8gk4t\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.590881 4998 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-catalog-content\") pod \"redhat-marketplace-8gk4t\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.608888 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cwbn\" (UniqueName: \"kubernetes.io/projected/298c4351-591e-4e23-9c4e-b02c9c938b00-kube-api-access-7cwbn\") pod \"redhat-marketplace-8gk4t\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:23 crc kubenswrapper[4998]: I0227 10:54:23.751297 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:24 crc kubenswrapper[4998]: I0227 10:54:24.324449 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gk4t"] Feb 27 10:54:24 crc kubenswrapper[4998]: E0227 10:54:24.718589 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod298c4351_591e_4e23_9c4e_b02c9c938b00.slice/crio-4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod298c4351_591e_4e23_9c4e_b02c9c938b00.slice/crio-conmon-4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:54:24 crc kubenswrapper[4998]: I0227 10:54:24.886326 4998 generic.go:334] "Generic (PLEG): container finished" podID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerID="4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89" exitCode=0 
Feb 27 10:54:24 crc kubenswrapper[4998]: I0227 10:54:24.886379 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gk4t" event={"ID":"298c4351-591e-4e23-9c4e-b02c9c938b00","Type":"ContainerDied","Data":"4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89"} Feb 27 10:54:24 crc kubenswrapper[4998]: I0227 10:54:24.886408 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gk4t" event={"ID":"298c4351-591e-4e23-9c4e-b02c9c938b00","Type":"ContainerStarted","Data":"54e70b3a5f4a65eee0eaba0074eac388a44f8bb92b19ea996c6a11a21e4df2d9"} Feb 27 10:54:24 crc kubenswrapper[4998]: I0227 10:54:24.889759 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:54:25 crc kubenswrapper[4998]: I0227 10:54:25.897095 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gk4t" event={"ID":"298c4351-591e-4e23-9c4e-b02c9c938b00","Type":"ContainerStarted","Data":"ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af"} Feb 27 10:54:26 crc kubenswrapper[4998]: I0227 10:54:26.923047 4998 generic.go:334] "Generic (PLEG): container finished" podID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerID="ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af" exitCode=0 Feb 27 10:54:26 crc kubenswrapper[4998]: I0227 10:54:26.923205 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gk4t" event={"ID":"298c4351-591e-4e23-9c4e-b02c9c938b00","Type":"ContainerDied","Data":"ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af"} Feb 27 10:54:26 crc kubenswrapper[4998]: I0227 10:54:26.923404 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gk4t" 
event={"ID":"298c4351-591e-4e23-9c4e-b02c9c938b00","Type":"ContainerStarted","Data":"c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6"} Feb 27 10:54:26 crc kubenswrapper[4998]: I0227 10:54:26.950884 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8gk4t" podStartSLOduration=2.520538209 podStartE2EDuration="3.950861499s" podCreationTimestamp="2026-02-27 10:54:23 +0000 UTC" firstStartedPulling="2026-02-27 10:54:24.88948135 +0000 UTC m=+2216.887752318" lastFinishedPulling="2026-02-27 10:54:26.31980464 +0000 UTC m=+2218.318075608" observedRunningTime="2026-02-27 10:54:26.939805961 +0000 UTC m=+2218.938076929" watchObservedRunningTime="2026-02-27 10:54:26.950861499 +0000 UTC m=+2218.949132467" Feb 27 10:54:33 crc kubenswrapper[4998]: I0227 10:54:33.752289 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:33 crc kubenswrapper[4998]: I0227 10:54:33.752850 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:33 crc kubenswrapper[4998]: I0227 10:54:33.820428 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:34 crc kubenswrapper[4998]: I0227 10:54:34.026951 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:34 crc kubenswrapper[4998]: I0227 10:54:34.075880 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gk4t"] Feb 27 10:54:35 crc kubenswrapper[4998]: I0227 10:54:35.994578 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8gk4t" podUID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerName="registry-server" 
containerID="cri-o://c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6" gracePeriod=2 Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.511359 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.628710 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cwbn\" (UniqueName: \"kubernetes.io/projected/298c4351-591e-4e23-9c4e-b02c9c938b00-kube-api-access-7cwbn\") pod \"298c4351-591e-4e23-9c4e-b02c9c938b00\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.628759 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-catalog-content\") pod \"298c4351-591e-4e23-9c4e-b02c9c938b00\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.628858 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-utilities\") pod \"298c4351-591e-4e23-9c4e-b02c9c938b00\" (UID: \"298c4351-591e-4e23-9c4e-b02c9c938b00\") " Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.629990 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-utilities" (OuterVolumeSpecName: "utilities") pod "298c4351-591e-4e23-9c4e-b02c9c938b00" (UID: "298c4351-591e-4e23-9c4e-b02c9c938b00"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.636585 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298c4351-591e-4e23-9c4e-b02c9c938b00-kube-api-access-7cwbn" (OuterVolumeSpecName: "kube-api-access-7cwbn") pod "298c4351-591e-4e23-9c4e-b02c9c938b00" (UID: "298c4351-591e-4e23-9c4e-b02c9c938b00"). InnerVolumeSpecName "kube-api-access-7cwbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.662947 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "298c4351-591e-4e23-9c4e-b02c9c938b00" (UID: "298c4351-591e-4e23-9c4e-b02c9c938b00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.731518 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cwbn\" (UniqueName: \"kubernetes.io/projected/298c4351-591e-4e23-9c4e-b02c9c938b00-kube-api-access-7cwbn\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.731893 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:36 crc kubenswrapper[4998]: I0227 10:54:36.732099 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/298c4351-591e-4e23-9c4e-b02c9c938b00-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.005261 4998 generic.go:334] "Generic (PLEG): container finished" podID="298c4351-591e-4e23-9c4e-b02c9c938b00" 
containerID="c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6" exitCode=0 Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.005319 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gk4t" event={"ID":"298c4351-591e-4e23-9c4e-b02c9c938b00","Type":"ContainerDied","Data":"c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6"} Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.005331 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gk4t" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.005357 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gk4t" event={"ID":"298c4351-591e-4e23-9c4e-b02c9c938b00","Type":"ContainerDied","Data":"54e70b3a5f4a65eee0eaba0074eac388a44f8bb92b19ea996c6a11a21e4df2d9"} Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.005378 4998 scope.go:117] "RemoveContainer" containerID="c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.026964 4998 scope.go:117] "RemoveContainer" containerID="ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.034803 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gk4t"] Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.044156 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gk4t"] Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.066087 4998 scope.go:117] "RemoveContainer" containerID="4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.106351 4998 scope.go:117] "RemoveContainer" containerID="c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6" Feb 27 
10:54:37 crc kubenswrapper[4998]: E0227 10:54:37.114911 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6\": container with ID starting with c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6 not found: ID does not exist" containerID="c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.114971 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6"} err="failed to get container status \"c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6\": rpc error: code = NotFound desc = could not find container \"c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6\": container with ID starting with c7111fb1a207b0cd3a7ae066fe29ac22e6477ed7313f215087622c401e02a8a6 not found: ID does not exist" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.115006 4998 scope.go:117] "RemoveContainer" containerID="ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af" Feb 27 10:54:37 crc kubenswrapper[4998]: E0227 10:54:37.120818 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af\": container with ID starting with ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af not found: ID does not exist" containerID="ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.120875 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af"} err="failed to get container status 
\"ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af\": rpc error: code = NotFound desc = could not find container \"ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af\": container with ID starting with ad1d69a69670570bc40a53c69156baa8f960d4bed8259691b16be5f1ac5711af not found: ID does not exist" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.120911 4998 scope.go:117] "RemoveContainer" containerID="4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89" Feb 27 10:54:37 crc kubenswrapper[4998]: E0227 10:54:37.125654 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89\": container with ID starting with 4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89 not found: ID does not exist" containerID="4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89" Feb 27 10:54:37 crc kubenswrapper[4998]: I0227 10:54:37.125697 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89"} err="failed to get container status \"4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89\": rpc error: code = NotFound desc = could not find container \"4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89\": container with ID starting with 4471ecdaa02659d50414671b1d4cbde4138159044dade46b70bbc01a66866f89 not found: ID does not exist" Feb 27 10:54:38 crc kubenswrapper[4998]: I0227 10:54:38.778333 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="298c4351-591e-4e23-9c4e-b02c9c938b00" path="/var/lib/kubelet/pods/298c4351-591e-4e23-9c4e-b02c9c938b00/volumes" Feb 27 10:54:40 crc kubenswrapper[4998]: I0227 10:54:40.504899 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:54:40 crc kubenswrapper[4998]: I0227 10:54:40.505170 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:54:48 crc kubenswrapper[4998]: I0227 10:54:48.809628 4998 scope.go:117] "RemoveContainer" containerID="9efbca7d52b4645b3c56013046c46d39d981cadb9849f0217787eafd0027a9af" Feb 27 10:55:10 crc kubenswrapper[4998]: I0227 10:55:10.505085 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:55:10 crc kubenswrapper[4998]: I0227 10:55:10.505854 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:55:10 crc kubenswrapper[4998]: I0227 10:55:10.505922 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 10:55:10 crc kubenswrapper[4998]: I0227 10:55:10.506794 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"} 
pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:55:10 crc kubenswrapper[4998]: I0227 10:55:10.506863 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" gracePeriod=600 Feb 27 10:55:10 crc kubenswrapper[4998]: E0227 10:55:10.633425 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:55:11 crc kubenswrapper[4998]: I0227 10:55:11.352408 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" exitCode=0 Feb 27 10:55:11 crc kubenswrapper[4998]: I0227 10:55:11.352475 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"} Feb 27 10:55:11 crc kubenswrapper[4998]: I0227 10:55:11.352511 4998 scope.go:117] "RemoveContainer" containerID="04de93613a67bf6912778a8335db7a4a5c63027b3022fa81b03ae228d08d8d7b" Feb 27 10:55:11 crc kubenswrapper[4998]: I0227 10:55:11.353346 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 
27 10:55:11 crc kubenswrapper[4998]: E0227 10:55:11.353751 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:55:23 crc kubenswrapper[4998]: I0227 10:55:23.764632 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 10:55:23 crc kubenswrapper[4998]: E0227 10:55:23.765362 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:55:36 crc kubenswrapper[4998]: I0227 10:55:36.764795 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 10:55:36 crc kubenswrapper[4998]: E0227 10:55:36.765644 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.377449 4998 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-b5rhx"]
Feb 27 10:55:46 crc kubenswrapper[4998]: E0227 10:55:46.378376 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerName="extract-utilities"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.378389 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerName="extract-utilities"
Feb 27 10:55:46 crc kubenswrapper[4998]: E0227 10:55:46.378401 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerName="extract-content"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.378407 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerName="extract-content"
Feb 27 10:55:46 crc kubenswrapper[4998]: E0227 10:55:46.378434 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerName="registry-server"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.378441 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerName="registry-server"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.378592 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="298c4351-591e-4e23-9c4e-b02c9c938b00" containerName="registry-server"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.379989 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.396385 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5rhx"]
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.563350 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n729x\" (UniqueName: \"kubernetes.io/projected/6a5a65e6-f3ed-4489-9573-2d6e72465be5-kube-api-access-n729x\") pod \"community-operators-b5rhx\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") " pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.563446 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-catalog-content\") pod \"community-operators-b5rhx\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") " pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.563477 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-utilities\") pod \"community-operators-b5rhx\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") " pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.665180 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n729x\" (UniqueName: \"kubernetes.io/projected/6a5a65e6-f3ed-4489-9573-2d6e72465be5-kube-api-access-n729x\") pod \"community-operators-b5rhx\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") " pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.665298 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-catalog-content\") pod \"community-operators-b5rhx\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") " pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.665345 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-utilities\") pod \"community-operators-b5rhx\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") " pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.665950 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-utilities\") pod \"community-operators-b5rhx\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") " pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.666311 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-catalog-content\") pod \"community-operators-b5rhx\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") " pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.685119 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n729x\" (UniqueName: \"kubernetes.io/projected/6a5a65e6-f3ed-4489-9573-2d6e72465be5-kube-api-access-n729x\") pod \"community-operators-b5rhx\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") " pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:46 crc kubenswrapper[4998]: I0227 10:55:46.710767 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:47 crc kubenswrapper[4998]: I0227 10:55:47.273558 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5rhx"]
Feb 27 10:55:47 crc kubenswrapper[4998]: W0227 10:55:47.281641 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a5a65e6_f3ed_4489_9573_2d6e72465be5.slice/crio-a194bfd824df6e90e32043c137035c58424badf03e60fb4070867cd4707edec2 WatchSource:0}: Error finding container a194bfd824df6e90e32043c137035c58424badf03e60fb4070867cd4707edec2: Status 404 returned error can't find the container with id a194bfd824df6e90e32043c137035c58424badf03e60fb4070867cd4707edec2
Feb 27 10:55:47 crc kubenswrapper[4998]: I0227 10:55:47.677004 4998 generic.go:334] "Generic (PLEG): container finished" podID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerID="7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d" exitCode=0
Feb 27 10:55:47 crc kubenswrapper[4998]: I0227 10:55:47.677047 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5rhx" event={"ID":"6a5a65e6-f3ed-4489-9573-2d6e72465be5","Type":"ContainerDied","Data":"7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d"}
Feb 27 10:55:47 crc kubenswrapper[4998]: I0227 10:55:47.677078 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5rhx" event={"ID":"6a5a65e6-f3ed-4489-9573-2d6e72465be5","Type":"ContainerStarted","Data":"a194bfd824df6e90e32043c137035c58424badf03e60fb4070867cd4707edec2"}
Feb 27 10:55:48 crc kubenswrapper[4998]: I0227 10:55:48.690050 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5rhx" event={"ID":"6a5a65e6-f3ed-4489-9573-2d6e72465be5","Type":"ContainerStarted","Data":"548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7"}
Feb 27 10:55:49 crc kubenswrapper[4998]: I0227 10:55:49.708030 4998 generic.go:334] "Generic (PLEG): container finished" podID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerID="548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7" exitCode=0
Feb 27 10:55:49 crc kubenswrapper[4998]: I0227 10:55:49.708113 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5rhx" event={"ID":"6a5a65e6-f3ed-4489-9573-2d6e72465be5","Type":"ContainerDied","Data":"548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7"}
Feb 27 10:55:49 crc kubenswrapper[4998]: I0227 10:55:49.765330 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:55:49 crc kubenswrapper[4998]: E0227 10:55:49.765721 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:55:52 crc kubenswrapper[4998]: I0227 10:55:52.944078 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5rhx" event={"ID":"6a5a65e6-f3ed-4489-9573-2d6e72465be5","Type":"ContainerStarted","Data":"ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c"}
Feb 27 10:55:52 crc kubenswrapper[4998]: I0227 10:55:52.978068 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b5rhx" podStartSLOduration=2.457584366 podStartE2EDuration="6.978012777s" podCreationTimestamp="2026-02-27 10:55:46 +0000 UTC" firstStartedPulling="2026-02-27 10:55:47.679445897 +0000 UTC m=+2299.677716865" lastFinishedPulling="2026-02-27 10:55:52.199874308 +0000 UTC m=+2304.198145276" observedRunningTime="2026-02-27 10:55:52.975342515 +0000 UTC m=+2304.973613493" watchObservedRunningTime="2026-02-27 10:55:52.978012777 +0000 UTC m=+2304.976283745"
Feb 27 10:55:56 crc kubenswrapper[4998]: I0227 10:55:56.711888 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:56 crc kubenswrapper[4998]: I0227 10:55:56.712422 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:55:56 crc kubenswrapper[4998]: I0227 10:55:56.763096 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.141086 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536496-hrmgf"]
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.142879 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536496-hrmgf"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.145352 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.145673 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.146039 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.162985 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536496-hrmgf"]
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.248957 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldj7\" (UniqueName: \"kubernetes.io/projected/2097850c-5872-4115-837e-f7bdd36a5e0e-kube-api-access-wldj7\") pod \"auto-csr-approver-29536496-hrmgf\" (UID: \"2097850c-5872-4115-837e-f7bdd36a5e0e\") " pod="openshift-infra/auto-csr-approver-29536496-hrmgf"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.350897 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wldj7\" (UniqueName: \"kubernetes.io/projected/2097850c-5872-4115-837e-f7bdd36a5e0e-kube-api-access-wldj7\") pod \"auto-csr-approver-29536496-hrmgf\" (UID: \"2097850c-5872-4115-837e-f7bdd36a5e0e\") " pod="openshift-infra/auto-csr-approver-29536496-hrmgf"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.371473 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wldj7\" (UniqueName: \"kubernetes.io/projected/2097850c-5872-4115-837e-f7bdd36a5e0e-kube-api-access-wldj7\") pod \"auto-csr-approver-29536496-hrmgf\" (UID: \"2097850c-5872-4115-837e-f7bdd36a5e0e\") " pod="openshift-infra/auto-csr-approver-29536496-hrmgf"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.470049 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536496-hrmgf"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.764985 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:56:00 crc kubenswrapper[4998]: E0227 10:56:00.765541 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:56:00 crc kubenswrapper[4998]: I0227 10:56:00.946940 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536496-hrmgf"]
Feb 27 10:56:01 crc kubenswrapper[4998]: I0227 10:56:01.025934 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536496-hrmgf" event={"ID":"2097850c-5872-4115-837e-f7bdd36a5e0e","Type":"ContainerStarted","Data":"dde139cb486595b0d35006c311804fe4dc1d1e0dbab761e1e1d626f643a94e13"}
Feb 27 10:56:03 crc kubenswrapper[4998]: I0227 10:56:03.061137 4998 generic.go:334] "Generic (PLEG): container finished" podID="2097850c-5872-4115-837e-f7bdd36a5e0e" containerID="c66bb1f650c2aa0f8be4b7e37e6bccd11964da4848402148bc570b6802a42150" exitCode=0
Feb 27 10:56:03 crc kubenswrapper[4998]: I0227 10:56:03.061268 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536496-hrmgf" event={"ID":"2097850c-5872-4115-837e-f7bdd36a5e0e","Type":"ContainerDied","Data":"c66bb1f650c2aa0f8be4b7e37e6bccd11964da4848402148bc570b6802a42150"}
Feb 27 10:56:04 crc kubenswrapper[4998]: I0227 10:56:04.459892 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536496-hrmgf"
Feb 27 10:56:04 crc kubenswrapper[4998]: I0227 10:56:04.639337 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wldj7\" (UniqueName: \"kubernetes.io/projected/2097850c-5872-4115-837e-f7bdd36a5e0e-kube-api-access-wldj7\") pod \"2097850c-5872-4115-837e-f7bdd36a5e0e\" (UID: \"2097850c-5872-4115-837e-f7bdd36a5e0e\") "
Feb 27 10:56:04 crc kubenswrapper[4998]: I0227 10:56:04.655797 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2097850c-5872-4115-837e-f7bdd36a5e0e-kube-api-access-wldj7" (OuterVolumeSpecName: "kube-api-access-wldj7") pod "2097850c-5872-4115-837e-f7bdd36a5e0e" (UID: "2097850c-5872-4115-837e-f7bdd36a5e0e"). InnerVolumeSpecName "kube-api-access-wldj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:56:04 crc kubenswrapper[4998]: I0227 10:56:04.742872 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wldj7\" (UniqueName: \"kubernetes.io/projected/2097850c-5872-4115-837e-f7bdd36a5e0e-kube-api-access-wldj7\") on node \"crc\" DevicePath \"\""
Feb 27 10:56:05 crc kubenswrapper[4998]: I0227 10:56:05.084297 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536496-hrmgf" event={"ID":"2097850c-5872-4115-837e-f7bdd36a5e0e","Type":"ContainerDied","Data":"dde139cb486595b0d35006c311804fe4dc1d1e0dbab761e1e1d626f643a94e13"}
Feb 27 10:56:05 crc kubenswrapper[4998]: I0227 10:56:05.084332 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dde139cb486595b0d35006c311804fe4dc1d1e0dbab761e1e1d626f643a94e13"
Feb 27 10:56:05 crc kubenswrapper[4998]: I0227 10:56:05.084420 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536496-hrmgf"
Feb 27 10:56:05 crc kubenswrapper[4998]: I0227 10:56:05.533998 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536490-qzmpd"]
Feb 27 10:56:05 crc kubenswrapper[4998]: I0227 10:56:05.541247 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536490-qzmpd"]
Feb 27 10:56:06 crc kubenswrapper[4998]: I0227 10:56:06.763419 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:56:06 crc kubenswrapper[4998]: I0227 10:56:06.784765 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af23187-3396-44e8-8d49-96deb9d77a91" path="/var/lib/kubelet/pods/6af23187-3396-44e8-8d49-96deb9d77a91/volumes"
Feb 27 10:56:06 crc kubenswrapper[4998]: I0227 10:56:06.825326 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5rhx"]
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.101540 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b5rhx" podUID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerName="registry-server" containerID="cri-o://ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c" gracePeriod=2
Feb 27 10:56:07 crc kubenswrapper[4998]: E0227 10:56:07.184753 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a5a65e6_f3ed_4489_9573_2d6e72465be5.slice/crio-ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c.scope\": RecentStats: unable to find data in memory cache]"
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.595347 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.697192 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n729x\" (UniqueName: \"kubernetes.io/projected/6a5a65e6-f3ed-4489-9573-2d6e72465be5-kube-api-access-n729x\") pod \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") "
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.697454 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-utilities\") pod \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") "
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.697617 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-catalog-content\") pod \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\" (UID: \"6a5a65e6-f3ed-4489-9573-2d6e72465be5\") "
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.698390 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-utilities" (OuterVolumeSpecName: "utilities") pod "6a5a65e6-f3ed-4489-9573-2d6e72465be5" (UID: "6a5a65e6-f3ed-4489-9573-2d6e72465be5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.702025 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a5a65e6-f3ed-4489-9573-2d6e72465be5-kube-api-access-n729x" (OuterVolumeSpecName: "kube-api-access-n729x") pod "6a5a65e6-f3ed-4489-9573-2d6e72465be5" (UID: "6a5a65e6-f3ed-4489-9573-2d6e72465be5"). InnerVolumeSpecName "kube-api-access-n729x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.755492 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a5a65e6-f3ed-4489-9573-2d6e72465be5" (UID: "6a5a65e6-f3ed-4489-9573-2d6e72465be5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.799692 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.799739 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a5a65e6-f3ed-4489-9573-2d6e72465be5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:56:07 crc kubenswrapper[4998]: I0227 10:56:07.799751 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n729x\" (UniqueName: \"kubernetes.io/projected/6a5a65e6-f3ed-4489-9573-2d6e72465be5-kube-api-access-n729x\") on node \"crc\" DevicePath \"\""
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.109661 4998 generic.go:334] "Generic (PLEG): container finished" podID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerID="ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c" exitCode=0
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.109702 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5rhx" event={"ID":"6a5a65e6-f3ed-4489-9573-2d6e72465be5","Type":"ContainerDied","Data":"ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c"}
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.109727 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5rhx"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.109746 4998 scope.go:117] "RemoveContainer" containerID="ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.109735 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5rhx" event={"ID":"6a5a65e6-f3ed-4489-9573-2d6e72465be5","Type":"ContainerDied","Data":"a194bfd824df6e90e32043c137035c58424badf03e60fb4070867cd4707edec2"}
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.134217 4998 scope.go:117] "RemoveContainer" containerID="548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.143482 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5rhx"]
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.150700 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b5rhx"]
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.162222 4998 scope.go:117] "RemoveContainer" containerID="7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.199658 4998 scope.go:117] "RemoveContainer" containerID="ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c"
Feb 27 10:56:08 crc kubenswrapper[4998]: E0227 10:56:08.200271 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c\": container with ID starting with ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c not found: ID does not exist" containerID="ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.200333 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c"} err="failed to get container status \"ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c\": rpc error: code = NotFound desc = could not find container \"ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c\": container with ID starting with ce3033df989693b9c415a2ae16ca8558c5394226c374d800de7a1bcafac7e45c not found: ID does not exist"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.200367 4998 scope.go:117] "RemoveContainer" containerID="548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7"
Feb 27 10:56:08 crc kubenswrapper[4998]: E0227 10:56:08.200953 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7\": container with ID starting with 548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7 not found: ID does not exist" containerID="548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.201033 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7"} err="failed to get container status \"548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7\": rpc error: code = NotFound desc = could not find container \"548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7\": container with ID starting with 548613e424346fd884db303f1873e4de8e7e521e7dc9a6d05a648b8c070fd7a7 not found: ID does not exist"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.201071 4998 scope.go:117] "RemoveContainer" containerID="7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d"
Feb 27 10:56:08 crc kubenswrapper[4998]: E0227 10:56:08.201483 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d\": container with ID starting with 7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d not found: ID does not exist" containerID="7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.201530 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d"} err="failed to get container status \"7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d\": rpc error: code = NotFound desc = could not find container \"7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d\": container with ID starting with 7bed490dbc59d52e33917ee896efa19575f1fa76c0c4ce713b2a39cc69d6f31d not found: ID does not exist"
Feb 27 10:56:08 crc kubenswrapper[4998]: I0227 10:56:08.777693 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" path="/var/lib/kubelet/pods/6a5a65e6-f3ed-4489-9573-2d6e72465be5/volumes"
Feb 27 10:56:12 crc kubenswrapper[4998]: I0227 10:56:12.765676 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:56:12 crc kubenswrapper[4998]: E0227 10:56:12.766388 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:56:27 crc kubenswrapper[4998]: I0227 10:56:27.764931 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:56:27 crc kubenswrapper[4998]: E0227 10:56:27.766673 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.575786 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x7mg4"]
Feb 27 10:56:29 crc kubenswrapper[4998]: E0227 10:56:29.576189 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2097850c-5872-4115-837e-f7bdd36a5e0e" containerName="oc"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.576205 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2097850c-5872-4115-837e-f7bdd36a5e0e" containerName="oc"
Feb 27 10:56:29 crc kubenswrapper[4998]: E0227 10:56:29.576222 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerName="extract-content"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.576249 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerName="extract-content"
Feb 27 10:56:29 crc kubenswrapper[4998]: E0227 10:56:29.576259 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerName="registry-server"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.576269 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerName="registry-server"
Feb 27 10:56:29 crc kubenswrapper[4998]: E0227 10:56:29.576294 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerName="extract-utilities"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.576302 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerName="extract-utilities"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.576517 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="2097850c-5872-4115-837e-f7bdd36a5e0e" containerName="oc"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.576535 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a5a65e6-f3ed-4489-9573-2d6e72465be5" containerName="registry-server"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.578162 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.586819 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x7mg4"]
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.621279 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9b958b3-f9cf-4386-9734-3d52c0c3ba65-utilities\") pod \"certified-operators-x7mg4\" (UID: \"d9b958b3-f9cf-4386-9734-3d52c0c3ba65\") " pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.621416 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9b958b3-f9cf-4386-9734-3d52c0c3ba65-catalog-content\") pod \"certified-operators-x7mg4\" (UID: \"d9b958b3-f9cf-4386-9734-3d52c0c3ba65\") " pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.621473 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7cxf\" (UniqueName: \"kubernetes.io/projected/d9b958b3-f9cf-4386-9734-3d52c0c3ba65-kube-api-access-v7cxf\") pod \"certified-operators-x7mg4\" (UID: \"d9b958b3-f9cf-4386-9734-3d52c0c3ba65\") " pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.723047 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7cxf\" (UniqueName: \"kubernetes.io/projected/d9b958b3-f9cf-4386-9734-3d52c0c3ba65-kube-api-access-v7cxf\") pod \"certified-operators-x7mg4\" (UID: \"d9b958b3-f9cf-4386-9734-3d52c0c3ba65\") " pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.723135 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9b958b3-f9cf-4386-9734-3d52c0c3ba65-utilities\") pod \"certified-operators-x7mg4\" (UID: \"d9b958b3-f9cf-4386-9734-3d52c0c3ba65\") " pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.723252 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9b958b3-f9cf-4386-9734-3d52c0c3ba65-catalog-content\") pod \"certified-operators-x7mg4\" (UID: \"d9b958b3-f9cf-4386-9734-3d52c0c3ba65\") " pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.723782 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9b958b3-f9cf-4386-9734-3d52c0c3ba65-catalog-content\") pod \"certified-operators-x7mg4\" (UID: \"d9b958b3-f9cf-4386-9734-3d52c0c3ba65\") " pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.724013 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9b958b3-f9cf-4386-9734-3d52c0c3ba65-utilities\") pod \"certified-operators-x7mg4\" (UID: \"d9b958b3-f9cf-4386-9734-3d52c0c3ba65\") " pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.746275 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7cxf\" (UniqueName: \"kubernetes.io/projected/d9b958b3-f9cf-4386-9734-3d52c0c3ba65-kube-api-access-v7cxf\") pod \"certified-operators-x7mg4\" (UID: \"d9b958b3-f9cf-4386-9734-3d52c0c3ba65\") " pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:29 crc kubenswrapper[4998]: I0227 10:56:29.909133 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:30 crc kubenswrapper[4998]: I0227 10:56:30.473441 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x7mg4"]
Feb 27 10:56:31 crc kubenswrapper[4998]: I0227 10:56:31.323998 4998 generic.go:334] "Generic (PLEG): container finished" podID="d9b958b3-f9cf-4386-9734-3d52c0c3ba65" containerID="73c124b10b0486f933da6813c0e0f204607e14ac408f76ffe5416e6b0d1f5885" exitCode=0
Feb 27 10:56:31 crc kubenswrapper[4998]: I0227 10:56:31.324046 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mg4" event={"ID":"d9b958b3-f9cf-4386-9734-3d52c0c3ba65","Type":"ContainerDied","Data":"73c124b10b0486f933da6813c0e0f204607e14ac408f76ffe5416e6b0d1f5885"}
Feb 27 10:56:31 crc kubenswrapper[4998]: I0227 10:56:31.324315 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mg4" event={"ID":"d9b958b3-f9cf-4386-9734-3d52c0c3ba65","Type":"ContainerStarted","Data":"4e587d13b760568ba8ea5333398e0d07b284ce699e66955127d5d4bddec6c655"}
Feb 27 10:56:37 crc kubenswrapper[4998]: I0227 10:56:37.389672 4998 generic.go:334] "Generic (PLEG): container finished" podID="d9b958b3-f9cf-4386-9734-3d52c0c3ba65" containerID="808828b831c07d3b07610005c2e22729d25e9d064da7d6a818aac23be98480b4" exitCode=0
Feb 27 10:56:37 crc kubenswrapper[4998]: I0227 10:56:37.389719 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mg4" event={"ID":"d9b958b3-f9cf-4386-9734-3d52c0c3ba65","Type":"ContainerDied","Data":"808828b831c07d3b07610005c2e22729d25e9d064da7d6a818aac23be98480b4"}
Feb 27 10:56:38 crc kubenswrapper[4998]: I0227 10:56:38.404691 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mg4" event={"ID":"d9b958b3-f9cf-4386-9734-3d52c0c3ba65","Type":"ContainerStarted","Data":"018559bcf5dbbf77cc67e2a30dec86659cf80d96cc34c2acb1d1d0f1c9bce77e"}
Feb 27 10:56:38 crc kubenswrapper[4998]: I0227 10:56:38.434707 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x7mg4" podStartSLOduration=2.981527487 podStartE2EDuration="9.434684512s" podCreationTimestamp="2026-02-27 10:56:29 +0000 UTC" firstStartedPulling="2026-02-27 10:56:31.326115689 +0000 UTC m=+2343.324386657" lastFinishedPulling="2026-02-27 10:56:37.779272714 +0000 UTC m=+2349.777543682" observedRunningTime="2026-02-27 10:56:38.426052399 +0000 UTC m=+2350.424323407" watchObservedRunningTime="2026-02-27 10:56:38.434684512 +0000 UTC m=+2350.432955470"
Feb 27 10:56:39 crc kubenswrapper[4998]: I0227 10:56:39.909770 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:39 crc kubenswrapper[4998]: I0227 10:56:39.909899 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x7mg4"
Feb 27 10:56:39 crc kubenswrapper[4998]: I0227 10:56:39.958536 4998 kubelet.go:2542]
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x7mg4" Feb 27 10:56:40 crc kubenswrapper[4998]: I0227 10:56:40.765608 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 10:56:40 crc kubenswrapper[4998]: E0227 10:56:40.766048 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:56:48 crc kubenswrapper[4998]: I0227 10:56:48.913865 4998 scope.go:117] "RemoveContainer" containerID="5b75aa082c7e0c2d83962823826db1897428a5c5fe7ab7c09b7255f6bf953d5e" Feb 27 10:56:49 crc kubenswrapper[4998]: I0227 10:56:49.980743 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x7mg4" Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.060629 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x7mg4"] Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.107193 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hqqj"] Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.107568 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5hqqj" podUID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerName="registry-server" containerID="cri-o://ad286d860afaa73859e36c2e5a072cd90a581e79835a274f8cc79626572bfd12" gracePeriod=2 Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.510384 4998 generic.go:334] "Generic (PLEG): container finished" 
podID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerID="ad286d860afaa73859e36c2e5a072cd90a581e79835a274f8cc79626572bfd12" exitCode=0 Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.511183 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hqqj" event={"ID":"dc780063-d484-4f3b-9baf-c1071d1a5b23","Type":"ContainerDied","Data":"ad286d860afaa73859e36c2e5a072cd90a581e79835a274f8cc79626572bfd12"} Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.511212 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hqqj" event={"ID":"dc780063-d484-4f3b-9baf-c1071d1a5b23","Type":"ContainerDied","Data":"ec96ac6c8ba4662d4b5bb139b7311d19253e4ffb1716f4ad54262e65ebb544c2"} Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.511242 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec96ac6c8ba4662d4b5bb139b7311d19253e4ffb1716f4ad54262e65ebb544c2" Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.543937 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.632686 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-catalog-content\") pod \"dc780063-d484-4f3b-9baf-c1071d1a5b23\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.632841 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-utilities\") pod \"dc780063-d484-4f3b-9baf-c1071d1a5b23\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.632890 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fqlc\" (UniqueName: \"kubernetes.io/projected/dc780063-d484-4f3b-9baf-c1071d1a5b23-kube-api-access-6fqlc\") pod \"dc780063-d484-4f3b-9baf-c1071d1a5b23\" (UID: \"dc780063-d484-4f3b-9baf-c1071d1a5b23\") " Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.634534 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-utilities" (OuterVolumeSpecName: "utilities") pod "dc780063-d484-4f3b-9baf-c1071d1a5b23" (UID: "dc780063-d484-4f3b-9baf-c1071d1a5b23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.640370 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc780063-d484-4f3b-9baf-c1071d1a5b23-kube-api-access-6fqlc" (OuterVolumeSpecName: "kube-api-access-6fqlc") pod "dc780063-d484-4f3b-9baf-c1071d1a5b23" (UID: "dc780063-d484-4f3b-9baf-c1071d1a5b23"). InnerVolumeSpecName "kube-api-access-6fqlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.684473 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc780063-d484-4f3b-9baf-c1071d1a5b23" (UID: "dc780063-d484-4f3b-9baf-c1071d1a5b23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.737139 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.737183 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc780063-d484-4f3b-9baf-c1071d1a5b23-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:56:50 crc kubenswrapper[4998]: I0227 10:56:50.737199 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fqlc\" (UniqueName: \"kubernetes.io/projected/dc780063-d484-4f3b-9baf-c1071d1a5b23-kube-api-access-6fqlc\") on node \"crc\" DevicePath \"\"" Feb 27 10:56:51 crc kubenswrapper[4998]: I0227 10:56:51.517892 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5hqqj" Feb 27 10:56:51 crc kubenswrapper[4998]: I0227 10:56:51.538691 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hqqj"] Feb 27 10:56:51 crc kubenswrapper[4998]: I0227 10:56:51.551963 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5hqqj"] Feb 27 10:56:51 crc kubenswrapper[4998]: I0227 10:56:51.764877 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 10:56:51 crc kubenswrapper[4998]: E0227 10:56:51.765153 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:56:52 crc kubenswrapper[4998]: I0227 10:56:52.779108 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc780063-d484-4f3b-9baf-c1071d1a5b23" path="/var/lib/kubelet/pods/dc780063-d484-4f3b-9baf-c1071d1a5b23/volumes" Feb 27 10:57:06 crc kubenswrapper[4998]: I0227 10:57:06.765986 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 10:57:06 crc kubenswrapper[4998]: E0227 10:57:06.767052 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" 
podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:57:19 crc kubenswrapper[4998]: I0227 10:57:19.764377 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 10:57:19 crc kubenswrapper[4998]: E0227 10:57:19.765008 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:57:31 crc kubenswrapper[4998]: I0227 10:57:31.765822 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 10:57:31 crc kubenswrapper[4998]: E0227 10:57:31.766760 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:57:44 crc kubenswrapper[4998]: I0227 10:57:44.765954 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 10:57:44 crc kubenswrapper[4998]: E0227 10:57:44.766814 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 10:57:49 crc kubenswrapper[4998]: I0227 10:57:49.018169 4998 scope.go:117] "RemoveContainer" containerID="ad286d860afaa73859e36c2e5a072cd90a581e79835a274f8cc79626572bfd12" Feb 27 10:57:49 crc kubenswrapper[4998]: I0227 10:57:49.045929 4998 scope.go:117] "RemoveContainer" containerID="23d268034254ceae99ffe8551dc8b36388bad45c9cf33e279d19d6978a3689cc" Feb 27 10:57:49 crc kubenswrapper[4998]: I0227 10:57:49.071033 4998 scope.go:117] "RemoveContainer" containerID="800144401f9cdcb7a74ce983979fee962f3fd6fc568ed3af42e5e21886b514e2" Feb 27 10:57:53 crc kubenswrapper[4998]: I0227 10:57:53.067593 4998 generic.go:334] "Generic (PLEG): container finished" podID="201a3a34-a0ff-476a-9fb2-db9ad3757a58" containerID="6b6f30299b9eb80e8c5047de5d1d01175ba6dd13c564701a4cb0dad651cba28e" exitCode=0 Feb 27 10:57:53 crc kubenswrapper[4998]: I0227 10:57:53.068140 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9" event={"ID":"201a3a34-a0ff-476a-9fb2-db9ad3757a58","Type":"ContainerDied","Data":"6b6f30299b9eb80e8c5047de5d1d01175ba6dd13c564701a4cb0dad651cba28e"} Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.512906 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.649810 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d56ps\" (UniqueName: \"kubernetes.io/projected/201a3a34-a0ff-476a-9fb2-db9ad3757a58-kube-api-access-d56ps\") pod \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.650437 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-inventory\") pod \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.650647 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-secret-0\") pod \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.651062 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-ssh-key-openstack-edpm-ipam\") pod \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.651181 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-combined-ca-bundle\") pod \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\" (UID: \"201a3a34-a0ff-476a-9fb2-db9ad3757a58\") " Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.657680 4998 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201a3a34-a0ff-476a-9fb2-db9ad3757a58-kube-api-access-d56ps" (OuterVolumeSpecName: "kube-api-access-d56ps") pod "201a3a34-a0ff-476a-9fb2-db9ad3757a58" (UID: "201a3a34-a0ff-476a-9fb2-db9ad3757a58"). InnerVolumeSpecName "kube-api-access-d56ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.673667 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "201a3a34-a0ff-476a-9fb2-db9ad3757a58" (UID: "201a3a34-a0ff-476a-9fb2-db9ad3757a58"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.695349 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "201a3a34-a0ff-476a-9fb2-db9ad3757a58" (UID: "201a3a34-a0ff-476a-9fb2-db9ad3757a58"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.696756 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-inventory" (OuterVolumeSpecName: "inventory") pod "201a3a34-a0ff-476a-9fb2-db9ad3757a58" (UID: "201a3a34-a0ff-476a-9fb2-db9ad3757a58"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.696820 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "201a3a34-a0ff-476a-9fb2-db9ad3757a58" (UID: "201a3a34-a0ff-476a-9fb2-db9ad3757a58"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.755085 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.755122 4998 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.755134 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d56ps\" (UniqueName: \"kubernetes.io/projected/201a3a34-a0ff-476a-9fb2-db9ad3757a58-kube-api-access-d56ps\") on node \"crc\" DevicePath \"\"" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.755147 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:57:54 crc kubenswrapper[4998]: I0227 10:57:54.755160 4998 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/201a3a34-a0ff-476a-9fb2-db9ad3757a58-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.086996 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9" event={"ID":"201a3a34-a0ff-476a-9fb2-db9ad3757a58","Type":"ContainerDied","Data":"394507b3081fbe26931384c42b6bc6bce80730089e7cc13c91394cf183a56e61"} Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.087047 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.087052 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="394507b3081fbe26931384c42b6bc6bce80730089e7cc13c91394cf183a56e61" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.225400 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"] Feb 27 10:57:55 crc kubenswrapper[4998]: E0227 10:57:55.226160 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201a3a34-a0ff-476a-9fb2-db9ad3757a58" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.226201 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="201a3a34-a0ff-476a-9fb2-db9ad3757a58" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 27 10:57:55 crc kubenswrapper[4998]: E0227 10:57:55.226282 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerName="extract-content" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.226303 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerName="extract-content" Feb 27 10:57:55 crc kubenswrapper[4998]: E0227 10:57:55.226339 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerName="registry-server" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.226357 4998 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerName="registry-server" Feb 27 10:57:55 crc kubenswrapper[4998]: E0227 10:57:55.226410 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerName="extract-utilities" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.226428 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerName="extract-utilities" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.226878 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc780063-d484-4f3b-9baf-c1071d1a5b23" containerName="registry-server" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.226926 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="201a3a34-a0ff-476a-9fb2-db9ad3757a58" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.228365 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.233808 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"] Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.234170 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.234586 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.234724 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.234850 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.234960 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.235076 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.237687 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366416 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: 
I0227 10:57:55.366480 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366507 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366554 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366587 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366630 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366671 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366692 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366723 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366879 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.366940 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x46h9\" (UniqueName: \"kubernetes.io/projected/dde05d60-5841-4834-b7fc-a0dea36c8a93-kube-api-access-x46h9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.468862 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.468927 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.468957 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.468992 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.469025 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.469044 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x46h9\" (UniqueName: \"kubernetes.io/projected/dde05d60-5841-4834-b7fc-a0dea36c8a93-kube-api-access-x46h9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.469075 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.469104 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.469126 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.469167 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.469186 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.470429 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.472326 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.472330 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.472675 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.473000 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.473258 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.474636 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.475176 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.476603 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.482024 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.484941 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x46h9\" (UniqueName: \"kubernetes.io/projected/dde05d60-5841-4834-b7fc-a0dea36c8a93-kube-api-access-x46h9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-g6xl9\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:55 crc kubenswrapper[4998]: I0227 10:57:55.550427 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"
Feb 27 10:57:56 crc kubenswrapper[4998]: I0227 10:57:56.035955 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9"]
Feb 27 10:57:56 crc kubenswrapper[4998]: I0227 10:57:56.099262 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" event={"ID":"dde05d60-5841-4834-b7fc-a0dea36c8a93","Type":"ContainerStarted","Data":"bb30331c4e7607c8841e99c1c1111483543d3282047817033268121750c0dd0f"}
Feb 27 10:57:57 crc kubenswrapper[4998]: I0227 10:57:57.124790 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" event={"ID":"dde05d60-5841-4834-b7fc-a0dea36c8a93","Type":"ContainerStarted","Data":"ff324f27f07a344a861724d8818ae63476e64948a601e262f493bcf44409e5cd"}
Feb 27 10:57:57 crc kubenswrapper[4998]: I0227 10:57:57.166532 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" podStartSLOduration=1.722561136 podStartE2EDuration="2.166498863s" podCreationTimestamp="2026-02-27 10:57:55 +0000 UTC" firstStartedPulling="2026-02-27 10:57:56.04664892 +0000 UTC m=+2428.044919888" lastFinishedPulling="2026-02-27 10:57:56.490586647 +0000 UTC m=+2428.488857615" observedRunningTime="2026-02-27 10:57:57.150422555 +0000 UTC m=+2429.148693563" watchObservedRunningTime="2026-02-27 10:57:57.166498863 +0000 UTC m=+2429.164769871"
Feb 27 10:57:57 crc kubenswrapper[4998]: I0227 10:57:57.765344 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:57:57 crc kubenswrapper[4998]: E0227 10:57:57.765604 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.135092 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536498-c6m4k"]
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.138198 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536498-c6m4k"
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.146135 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.146166 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.146553 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch"
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.157618 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536498-c6m4k"]
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.267323 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96w2p\" (UniqueName: \"kubernetes.io/projected/bb7e87c2-b500-4b19-8173-7188e8e24443-kube-api-access-96w2p\") pod \"auto-csr-approver-29536498-c6m4k\" (UID: \"bb7e87c2-b500-4b19-8173-7188e8e24443\") " pod="openshift-infra/auto-csr-approver-29536498-c6m4k"
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.369732 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96w2p\" (UniqueName: \"kubernetes.io/projected/bb7e87c2-b500-4b19-8173-7188e8e24443-kube-api-access-96w2p\") pod \"auto-csr-approver-29536498-c6m4k\" (UID: \"bb7e87c2-b500-4b19-8173-7188e8e24443\") " pod="openshift-infra/auto-csr-approver-29536498-c6m4k"
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.390994 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96w2p\" (UniqueName: \"kubernetes.io/projected/bb7e87c2-b500-4b19-8173-7188e8e24443-kube-api-access-96w2p\") pod \"auto-csr-approver-29536498-c6m4k\" (UID: \"bb7e87c2-b500-4b19-8173-7188e8e24443\") " pod="openshift-infra/auto-csr-approver-29536498-c6m4k"
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.462282 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536498-c6m4k"
Feb 27 10:58:00 crc kubenswrapper[4998]: I0227 10:58:00.897162 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536498-c6m4k"]
Feb 27 10:58:00 crc kubenswrapper[4998]: W0227 10:58:00.899719 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb7e87c2_b500_4b19_8173_7188e8e24443.slice/crio-f0b2e7be742b8f1576ab0d4cfb14e363657931e22b899449334e09386c3d103a WatchSource:0}: Error finding container f0b2e7be742b8f1576ab0d4cfb14e363657931e22b899449334e09386c3d103a: Status 404 returned error can't find the container with id f0b2e7be742b8f1576ab0d4cfb14e363657931e22b899449334e09386c3d103a
Feb 27 10:58:01 crc kubenswrapper[4998]: I0227 10:58:01.163956 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536498-c6m4k" event={"ID":"bb7e87c2-b500-4b19-8173-7188e8e24443","Type":"ContainerStarted","Data":"f0b2e7be742b8f1576ab0d4cfb14e363657931e22b899449334e09386c3d103a"}
Feb 27 10:58:02 crc kubenswrapper[4998]: I0227 10:58:02.173670 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536498-c6m4k" event={"ID":"bb7e87c2-b500-4b19-8173-7188e8e24443","Type":"ContainerStarted","Data":"6a694e911c917907b4eb2b60212a6482ce64cfb6ebcfac6b794bde2223bf5c1e"}
Feb 27 10:58:02 crc kubenswrapper[4998]: I0227 10:58:02.198634 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536498-c6m4k" podStartSLOduration=1.251763346 podStartE2EDuration="2.19858863s" podCreationTimestamp="2026-02-27 10:58:00 +0000 UTC" firstStartedPulling="2026-02-27 10:58:00.902652397 +0000 UTC m=+2432.900923365" lastFinishedPulling="2026-02-27 10:58:01.849477681 +0000 UTC m=+2433.847748649" observedRunningTime="2026-02-27 10:58:02.189521499 +0000 UTC m=+2434.187792467" watchObservedRunningTime="2026-02-27 10:58:02.19858863 +0000 UTC m=+2434.196859598"
Feb 27 10:58:03 crc kubenswrapper[4998]: I0227 10:58:03.186811 4998 generic.go:334] "Generic (PLEG): container finished" podID="bb7e87c2-b500-4b19-8173-7188e8e24443" containerID="6a694e911c917907b4eb2b60212a6482ce64cfb6ebcfac6b794bde2223bf5c1e" exitCode=0
Feb 27 10:58:03 crc kubenswrapper[4998]: I0227 10:58:03.186968 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536498-c6m4k" event={"ID":"bb7e87c2-b500-4b19-8173-7188e8e24443","Type":"ContainerDied","Data":"6a694e911c917907b4eb2b60212a6482ce64cfb6ebcfac6b794bde2223bf5c1e"}
Feb 27 10:58:04 crc kubenswrapper[4998]: I0227 10:58:04.541485 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536498-c6m4k"
Feb 27 10:58:04 crc kubenswrapper[4998]: I0227 10:58:04.664946 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96w2p\" (UniqueName: \"kubernetes.io/projected/bb7e87c2-b500-4b19-8173-7188e8e24443-kube-api-access-96w2p\") pod \"bb7e87c2-b500-4b19-8173-7188e8e24443\" (UID: \"bb7e87c2-b500-4b19-8173-7188e8e24443\") "
Feb 27 10:58:04 crc kubenswrapper[4998]: I0227 10:58:04.671691 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7e87c2-b500-4b19-8173-7188e8e24443-kube-api-access-96w2p" (OuterVolumeSpecName: "kube-api-access-96w2p") pod "bb7e87c2-b500-4b19-8173-7188e8e24443" (UID: "bb7e87c2-b500-4b19-8173-7188e8e24443"). InnerVolumeSpecName "kube-api-access-96w2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:58:04 crc kubenswrapper[4998]: I0227 10:58:04.767123 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96w2p\" (UniqueName: \"kubernetes.io/projected/bb7e87c2-b500-4b19-8173-7188e8e24443-kube-api-access-96w2p\") on node \"crc\" DevicePath \"\""
Feb 27 10:58:05 crc kubenswrapper[4998]: I0227 10:58:05.212665 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536498-c6m4k" event={"ID":"bb7e87c2-b500-4b19-8173-7188e8e24443","Type":"ContainerDied","Data":"f0b2e7be742b8f1576ab0d4cfb14e363657931e22b899449334e09386c3d103a"}
Feb 27 10:58:05 crc kubenswrapper[4998]: I0227 10:58:05.212706 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0b2e7be742b8f1576ab0d4cfb14e363657931e22b899449334e09386c3d103a"
Feb 27 10:58:05 crc kubenswrapper[4998]: I0227 10:58:05.212744 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536498-c6m4k"
Feb 27 10:58:05 crc kubenswrapper[4998]: I0227 10:58:05.266383 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536492-j8djb"]
Feb 27 10:58:05 crc kubenswrapper[4998]: I0227 10:58:05.274443 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536492-j8djb"]
Feb 27 10:58:06 crc kubenswrapper[4998]: I0227 10:58:06.779385 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c9cd2a-63ea-4935-9102-c943c6479131" path="/var/lib/kubelet/pods/61c9cd2a-63ea-4935-9102-c943c6479131/volumes"
Feb 27 10:58:08 crc kubenswrapper[4998]: I0227 10:58:08.772715 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:58:08 crc kubenswrapper[4998]: E0227 10:58:08.774080 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:58:23 crc kubenswrapper[4998]: I0227 10:58:23.765118 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:58:23 crc kubenswrapper[4998]: E0227 10:58:23.765839 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:58:37 crc kubenswrapper[4998]: I0227 10:58:37.765815 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:58:37 crc kubenswrapper[4998]: E0227 10:58:37.766505 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:58:49 crc kubenswrapper[4998]: I0227 10:58:49.147435 4998 scope.go:117] "RemoveContainer" containerID="93cb2f76c85667978039c472fb5fa9408058836f54e2c418be1791dad6bb2483"
Feb 27 10:58:52 crc kubenswrapper[4998]: I0227 10:58:52.764955 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:58:52 crc kubenswrapper[4998]: E0227 10:58:52.766102 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:59:07 crc kubenswrapper[4998]: I0227 10:59:07.766458 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:59:07 crc kubenswrapper[4998]: E0227 10:59:07.767342 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:59:22 crc kubenswrapper[4998]: I0227 10:59:22.775906 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:59:22 crc kubenswrapper[4998]: E0227 10:59:22.778315 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:59:33 crc kubenswrapper[4998]: I0227 10:59:33.765115 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:59:33 crc kubenswrapper[4998]: E0227 10:59:33.766145 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:59:44 crc kubenswrapper[4998]: I0227 10:59:44.765428 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:59:44 crc kubenswrapper[4998]: E0227 10:59:44.766332 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 10:59:59 crc kubenswrapper[4998]: I0227 10:59:59.764797 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4"
Feb 27 10:59:59 crc kubenswrapper[4998]: E0227 10:59:59.765652 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.148826 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536500-q4vck"]
Feb 27 11:00:00 crc kubenswrapper[4998]: E0227 11:00:00.149490 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7e87c2-b500-4b19-8173-7188e8e24443" containerName="oc"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.149523 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7e87c2-b500-4b19-8173-7188e8e24443" containerName="oc"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.149834 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7e87c2-b500-4b19-8173-7188e8e24443" containerName="oc"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.150929 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536500-q4vck"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.153340 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.153381 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.153345 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.158496 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"]
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.160086 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.163858 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.164077 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.174159 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"]
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.189310 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536500-q4vck"]
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.222531 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtcbq\" (UniqueName: \"kubernetes.io/projected/1a7512eb-4b55-4205-9f1b-9dc902021e2f-kube-api-access-vtcbq\") pod \"auto-csr-approver-29536500-q4vck\" (UID: \"1a7512eb-4b55-4205-9f1b-9dc902021e2f\") " pod="openshift-infra/auto-csr-approver-29536500-q4vck"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.324729 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e902aadf-8e95-4bf8-a8df-0589512a283b-config-volume\") pod \"collect-profiles-29536500-42r5h\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.324816 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfj7\" (UniqueName: \"kubernetes.io/projected/e902aadf-8e95-4bf8-a8df-0589512a283b-kube-api-access-qzfj7\") pod \"collect-profiles-29536500-42r5h\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.324909 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e902aadf-8e95-4bf8-a8df-0589512a283b-secret-volume\") pod \"collect-profiles-29536500-42r5h\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.325289 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtcbq\" (UniqueName: \"kubernetes.io/projected/1a7512eb-4b55-4205-9f1b-9dc902021e2f-kube-api-access-vtcbq\") pod \"auto-csr-approver-29536500-q4vck\" (UID: \"1a7512eb-4b55-4205-9f1b-9dc902021e2f\") " pod="openshift-infra/auto-csr-approver-29536500-q4vck"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.344444 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtcbq\" (UniqueName: \"kubernetes.io/projected/1a7512eb-4b55-4205-9f1b-9dc902021e2f-kube-api-access-vtcbq\") pod \"auto-csr-approver-29536500-q4vck\" (UID: \"1a7512eb-4b55-4205-9f1b-9dc902021e2f\") " pod="openshift-infra/auto-csr-approver-29536500-q4vck"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.427345 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e902aadf-8e95-4bf8-a8df-0589512a283b-config-volume\") pod \"collect-profiles-29536500-42r5h\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.427427 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzfj7\" (UniqueName: \"kubernetes.io/projected/e902aadf-8e95-4bf8-a8df-0589512a283b-kube-api-access-qzfj7\") pod \"collect-profiles-29536500-42r5h\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.427444 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e902aadf-8e95-4bf8-a8df-0589512a283b-secret-volume\") pod \"collect-profiles-29536500-42r5h\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.428318 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e902aadf-8e95-4bf8-a8df-0589512a283b-config-volume\") pod \"collect-profiles-29536500-42r5h\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.430687 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e902aadf-8e95-4bf8-a8df-0589512a283b-secret-volume\") pod \"collect-profiles-29536500-42r5h\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.443607 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfj7\" (UniqueName: \"kubernetes.io/projected/e902aadf-8e95-4bf8-a8df-0589512a283b-kube-api-access-qzfj7\") pod \"collect-profiles-29536500-42r5h\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.480894 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536500-q4vck"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.498998 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.928417 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536500-q4vck"]
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.932020 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 11:00:00 crc kubenswrapper[4998]: I0227 11:00:00.993267 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"]
Feb 27 11:00:00 crc kubenswrapper[4998]: W0227 11:00:00.995002 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode902aadf_8e95_4bf8_a8df_0589512a283b.slice/crio-1b70b1daf88f273f59fdfb5e2b8ae9ca56b0feaa5401c4ee72d641ab104a63bb WatchSource:0}: Error finding container 1b70b1daf88f273f59fdfb5e2b8ae9ca56b0feaa5401c4ee72d641ab104a63bb: Status 404 returned error can't find the container with id 1b70b1daf88f273f59fdfb5e2b8ae9ca56b0feaa5401c4ee72d641ab104a63bb
Feb 27 11:00:01 crc kubenswrapper[4998]: I0227 11:00:01.273355 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h" event={"ID":"e902aadf-8e95-4bf8-a8df-0589512a283b","Type":"ContainerStarted","Data":"65517f136ab99e154b0d57897ef8aba05d148744ddab963424f604302029ea25"}
Feb 27 11:00:01 crc kubenswrapper[4998]: I0227 11:00:01.273406 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h" event={"ID":"e902aadf-8e95-4bf8-a8df-0589512a283b","Type":"ContainerStarted","Data":"1b70b1daf88f273f59fdfb5e2b8ae9ca56b0feaa5401c4ee72d641ab104a63bb"}
Feb 27 11:00:01 crc kubenswrapper[4998]: I0227 11:00:01.276016 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536500-q4vck" event={"ID":"1a7512eb-4b55-4205-9f1b-9dc902021e2f","Type":"ContainerStarted","Data":"49f53cf06dc343d03c6244c2a9adc3360dde479fcb57b73ff7602f66864a6e04"}
Feb 27 11:00:01 crc kubenswrapper[4998]: I0227 11:00:01.296369 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h" podStartSLOduration=1.29634869 podStartE2EDuration="1.29634869s" podCreationTimestamp="2026-02-27 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:00:01.294581373 +0000 UTC m=+2553.292852351" watchObservedRunningTime="2026-02-27 11:00:01.29634869 +0000 UTC m=+2553.294619658"
Feb 27 11:00:02 crc kubenswrapper[4998]: I0227 11:00:02.289214 4998 generic.go:334] "Generic (PLEG): container finished" podID="e902aadf-8e95-4bf8-a8df-0589512a283b" containerID="65517f136ab99e154b0d57897ef8aba05d148744ddab963424f604302029ea25" exitCode=0
Feb 27 11:00:02 crc kubenswrapper[4998]: I0227 11:00:02.289356 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h" event={"ID":"e902aadf-8e95-4bf8-a8df-0589512a283b","Type":"ContainerDied","Data":"65517f136ab99e154b0d57897ef8aba05d148744ddab963424f604302029ea25"}
Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.636074 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h"
Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.790647 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e902aadf-8e95-4bf8-a8df-0589512a283b-config-volume\") pod \"e902aadf-8e95-4bf8-a8df-0589512a283b\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") "
Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.790743 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e902aadf-8e95-4bf8-a8df-0589512a283b-secret-volume\") pod \"e902aadf-8e95-4bf8-a8df-0589512a283b\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") "
Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.791047 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfj7\" (UniqueName: \"kubernetes.io/projected/e902aadf-8e95-4bf8-a8df-0589512a283b-kube-api-access-qzfj7\") pod \"e902aadf-8e95-4bf8-a8df-0589512a283b\" (UID: \"e902aadf-8e95-4bf8-a8df-0589512a283b\") "
Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.791521 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e902aadf-8e95-4bf8-a8df-0589512a283b-config-volume" (OuterVolumeSpecName: "config-volume") pod "e902aadf-8e95-4bf8-a8df-0589512a283b" (UID: "e902aadf-8e95-4bf8-a8df-0589512a283b"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.791688 4998 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e902aadf-8e95-4bf8-a8df-0589512a283b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.797144 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e902aadf-8e95-4bf8-a8df-0589512a283b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e902aadf-8e95-4bf8-a8df-0589512a283b" (UID: "e902aadf-8e95-4bf8-a8df-0589512a283b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.797161 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e902aadf-8e95-4bf8-a8df-0589512a283b-kube-api-access-qzfj7" (OuterVolumeSpecName: "kube-api-access-qzfj7") pod "e902aadf-8e95-4bf8-a8df-0589512a283b" (UID: "e902aadf-8e95-4bf8-a8df-0589512a283b"). InnerVolumeSpecName "kube-api-access-qzfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.893966 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfj7\" (UniqueName: \"kubernetes.io/projected/e902aadf-8e95-4bf8-a8df-0589512a283b-kube-api-access-qzfj7\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:03 crc kubenswrapper[4998]: I0227 11:00:03.893996 4998 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e902aadf-8e95-4bf8-a8df-0589512a283b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:04 crc kubenswrapper[4998]: I0227 11:00:04.308958 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h" event={"ID":"e902aadf-8e95-4bf8-a8df-0589512a283b","Type":"ContainerDied","Data":"1b70b1daf88f273f59fdfb5e2b8ae9ca56b0feaa5401c4ee72d641ab104a63bb"} Feb 27 11:00:04 crc kubenswrapper[4998]: I0227 11:00:04.309004 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b70b1daf88f273f59fdfb5e2b8ae9ca56b0feaa5401c4ee72d641ab104a63bb" Feb 27 11:00:04 crc kubenswrapper[4998]: I0227 11:00:04.309017 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-42r5h" Feb 27 11:00:04 crc kubenswrapper[4998]: I0227 11:00:04.377530 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"] Feb 27 11:00:04 crc kubenswrapper[4998]: I0227 11:00:04.384914 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-gfh89"] Feb 27 11:00:04 crc kubenswrapper[4998]: I0227 11:00:04.776159 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223282ee-d242-4896-a1b9-9f63a9bb0915" path="/var/lib/kubelet/pods/223282ee-d242-4896-a1b9-9f63a9bb0915/volumes" Feb 27 11:00:09 crc kubenswrapper[4998]: I0227 11:00:09.356133 4998 generic.go:334] "Generic (PLEG): container finished" podID="dde05d60-5841-4834-b7fc-a0dea36c8a93" containerID="ff324f27f07a344a861724d8818ae63476e64948a601e262f493bcf44409e5cd" exitCode=0 Feb 27 11:00:09 crc kubenswrapper[4998]: I0227 11:00:09.356395 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" event={"ID":"dde05d60-5841-4834-b7fc-a0dea36c8a93","Type":"ContainerDied","Data":"ff324f27f07a344a861724d8818ae63476e64948a601e262f493bcf44409e5cd"} Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.782981 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934042 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-combined-ca-bundle\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934098 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x46h9\" (UniqueName: \"kubernetes.io/projected/dde05d60-5841-4834-b7fc-a0dea36c8a93-kube-api-access-x46h9\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934128 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-1\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934175 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-3\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934198 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-0\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 
11:00:10.934257 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-extra-config-0\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934297 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-2\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934320 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-inventory\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934372 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-ssh-key-openstack-edpm-ipam\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934417 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-1\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.934524 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-0\") pod \"dde05d60-5841-4834-b7fc-a0dea36c8a93\" (UID: \"dde05d60-5841-4834-b7fc-a0dea36c8a93\") " Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.952872 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde05d60-5841-4834-b7fc-a0dea36c8a93-kube-api-access-x46h9" (OuterVolumeSpecName: "kube-api-access-x46h9") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "kube-api-access-x46h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.953478 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.960132 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.964332 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.968583 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-inventory" (OuterVolumeSpecName: "inventory") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.972744 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.980444 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.984129 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.989955 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.991824 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:10 crc kubenswrapper[4998]: I0227 11:00:10.998847 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "dde05d60-5841-4834-b7fc-a0dea36c8a93" (UID: "dde05d60-5841-4834-b7fc-a0dea36c8a93"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037397 4998 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037428 4998 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037438 4998 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037446 4998 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037456 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037465 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037475 4998 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-migration-ssh-key-1\") on 
node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037486 4998 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037496 4998 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037506 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x46h9\" (UniqueName: \"kubernetes.io/projected/dde05d60-5841-4834-b7fc-a0dea36c8a93-kube-api-access-x46h9\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.037516 4998 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dde05d60-5841-4834-b7fc-a0dea36c8a93-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.375670 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" event={"ID":"dde05d60-5841-4834-b7fc-a0dea36c8a93","Type":"ContainerDied","Data":"bb30331c4e7607c8841e99c1c1111483543d3282047817033268121750c0dd0f"} Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.375714 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb30331c4e7607c8841e99c1c1111483543d3282047817033268121750c0dd0f" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.376119 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-g6xl9" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.475020 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg"] Feb 27 11:00:11 crc kubenswrapper[4998]: E0227 11:00:11.475491 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde05d60-5841-4834-b7fc-a0dea36c8a93" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.475508 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde05d60-5841-4834-b7fc-a0dea36c8a93" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 27 11:00:11 crc kubenswrapper[4998]: E0227 11:00:11.475532 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e902aadf-8e95-4bf8-a8df-0589512a283b" containerName="collect-profiles" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.475539 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e902aadf-8e95-4bf8-a8df-0589512a283b" containerName="collect-profiles" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.475688 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="e902aadf-8e95-4bf8-a8df-0589512a283b" containerName="collect-profiles" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.475710 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde05d60-5841-4834-b7fc-a0dea36c8a93" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.476315 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.478958 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.479172 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.479510 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.479820 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bpcp2" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.480057 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.499242 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg"] Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.547503 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.547902 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwcmg\" (UniqueName: \"kubernetes.io/projected/c9f848a0-3810-4574-82f8-097918a288a4-kube-api-access-lwcmg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: 
\"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.547951 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.547987 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.548109 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.548139 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.548167 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.649932 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.649998 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.650019 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 
11:00:11.650077 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.650116 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwcmg\" (UniqueName: \"kubernetes.io/projected/c9f848a0-3810-4574-82f8-097918a288a4-kube-api-access-lwcmg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.650158 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.650187 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.656175 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.656307 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.656612 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.656868 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.657057 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.657214 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.671565 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwcmg\" (UniqueName: \"kubernetes.io/projected/c9f848a0-3810-4574-82f8-097918a288a4-kube-api-access-lwcmg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:11 crc kubenswrapper[4998]: I0227 11:00:11.798091 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:00:12 crc kubenswrapper[4998]: I0227 11:00:12.862205 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg"] Feb 27 11:00:13 crc kubenswrapper[4998]: I0227 11:00:13.413759 4998 generic.go:334] "Generic (PLEG): container finished" podID="1a7512eb-4b55-4205-9f1b-9dc902021e2f" containerID="6391a4a5a0e8326c6dc3e7ca04c6c4bbb0aa97a0bba3190e9cc0d3695a3102be" exitCode=0 Feb 27 11:00:13 crc kubenswrapper[4998]: I0227 11:00:13.413833 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536500-q4vck" event={"ID":"1a7512eb-4b55-4205-9f1b-9dc902021e2f","Type":"ContainerDied","Data":"6391a4a5a0e8326c6dc3e7ca04c6c4bbb0aa97a0bba3190e9cc0d3695a3102be"} Feb 27 11:00:13 crc kubenswrapper[4998]: I0227 11:00:13.421620 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" event={"ID":"c9f848a0-3810-4574-82f8-097918a288a4","Type":"ContainerStarted","Data":"31f1e7a8bc32c4129d1171da07fd14e1ca1322e83a3e37b2448dee5871305596"} Feb 27 11:00:14 crc kubenswrapper[4998]: I0227 11:00:14.433889 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" event={"ID":"c9f848a0-3810-4574-82f8-097918a288a4","Type":"ContainerStarted","Data":"df9ff43759fbad9f51bd069d10682153d00966e79998be317ee27d7d5bce33db"} Feb 27 11:00:14 crc kubenswrapper[4998]: I0227 11:00:14.465801 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" podStartSLOduration=2.961477177 podStartE2EDuration="3.465771382s" podCreationTimestamp="2026-02-27 11:00:11 +0000 UTC" firstStartedPulling="2026-02-27 11:00:12.871290374 +0000 UTC m=+2564.869561342" lastFinishedPulling="2026-02-27 11:00:13.375584579 +0000 UTC 
m=+2565.373855547" observedRunningTime="2026-02-27 11:00:14.46195442 +0000 UTC m=+2566.460225508" watchObservedRunningTime="2026-02-27 11:00:14.465771382 +0000 UTC m=+2566.464042360" Feb 27 11:00:14 crc kubenswrapper[4998]: I0227 11:00:14.765815 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 11:00:14 crc kubenswrapper[4998]: I0227 11:00:14.810647 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536500-q4vck" Feb 27 11:00:14 crc kubenswrapper[4998]: I0227 11:00:14.934890 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtcbq\" (UniqueName: \"kubernetes.io/projected/1a7512eb-4b55-4205-9f1b-9dc902021e2f-kube-api-access-vtcbq\") pod \"1a7512eb-4b55-4205-9f1b-9dc902021e2f\" (UID: \"1a7512eb-4b55-4205-9f1b-9dc902021e2f\") " Feb 27 11:00:14 crc kubenswrapper[4998]: I0227 11:00:14.955344 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7512eb-4b55-4205-9f1b-9dc902021e2f-kube-api-access-vtcbq" (OuterVolumeSpecName: "kube-api-access-vtcbq") pod "1a7512eb-4b55-4205-9f1b-9dc902021e2f" (UID: "1a7512eb-4b55-4205-9f1b-9dc902021e2f"). InnerVolumeSpecName "kube-api-access-vtcbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:00:15 crc kubenswrapper[4998]: I0227 11:00:15.037648 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtcbq\" (UniqueName: \"kubernetes.io/projected/1a7512eb-4b55-4205-9f1b-9dc902021e2f-kube-api-access-vtcbq\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:15 crc kubenswrapper[4998]: I0227 11:00:15.450376 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536500-q4vck" Feb 27 11:00:15 crc kubenswrapper[4998]: I0227 11:00:15.450478 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536500-q4vck" event={"ID":"1a7512eb-4b55-4205-9f1b-9dc902021e2f","Type":"ContainerDied","Data":"49f53cf06dc343d03c6244c2a9adc3360dde479fcb57b73ff7602f66864a6e04"} Feb 27 11:00:15 crc kubenswrapper[4998]: I0227 11:00:15.450538 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f53cf06dc343d03c6244c2a9adc3360dde479fcb57b73ff7602f66864a6e04" Feb 27 11:00:15 crc kubenswrapper[4998]: I0227 11:00:15.456738 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"aa8a77df4c4a84d755ee497912634ba2635597c1a94094fe039e8e95846fe91f"} Feb 27 11:00:15 crc kubenswrapper[4998]: I0227 11:00:15.890370 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536494-bp2cm"] Feb 27 11:00:15 crc kubenswrapper[4998]: I0227 11:00:15.901539 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536494-bp2cm"] Feb 27 11:00:16 crc kubenswrapper[4998]: I0227 11:00:16.784684 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cba97d31-99ca-4c41-86ab-8a712a2c5ddb" path="/var/lib/kubelet/pods/cba97d31-99ca-4c41-86ab-8a712a2c5ddb/volumes" Feb 27 11:00:49 crc kubenswrapper[4998]: I0227 11:00:49.262391 4998 scope.go:117] "RemoveContainer" containerID="4a8e94e3f0a0033d67c7f3fa9c1afaf67b408f6883a860a4e441dd61a8ddbe4d" Feb 27 11:00:49 crc kubenswrapper[4998]: I0227 11:00:49.325771 4998 scope.go:117] "RemoveContainer" containerID="f9a692f8796aad2576deb220af9aeb561db69389600671e3e61d44bbd69a1dd2" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.155660 4998 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29536501-rg2vz"] Feb 27 11:01:00 crc kubenswrapper[4998]: E0227 11:01:00.156828 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7512eb-4b55-4205-9f1b-9dc902021e2f" containerName="oc" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.156849 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7512eb-4b55-4205-9f1b-9dc902021e2f" containerName="oc" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.157184 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7512eb-4b55-4205-9f1b-9dc902021e2f" containerName="oc" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.158150 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.164100 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536501-rg2vz"] Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.307163 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-combined-ca-bundle\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.307385 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-fernet-keys\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.307608 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kn7z\" (UniqueName: 
\"kubernetes.io/projected/6fd007ec-7007-4505-9839-dc43fd40ca3a-kube-api-access-2kn7z\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.307867 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-config-data\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.410053 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-config-data\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.410187 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-combined-ca-bundle\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.410302 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-fernet-keys\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.410405 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kn7z\" (UniqueName: 
\"kubernetes.io/projected/6fd007ec-7007-4505-9839-dc43fd40ca3a-kube-api-access-2kn7z\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.420404 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-fernet-keys\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.420566 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-combined-ca-bundle\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.424654 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-config-data\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.427411 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kn7z\" (UniqueName: \"kubernetes.io/projected/6fd007ec-7007-4505-9839-dc43fd40ca3a-kube-api-access-2kn7z\") pod \"keystone-cron-29536501-rg2vz\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.479517 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.943735 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536501-rg2vz"] Feb 27 11:01:00 crc kubenswrapper[4998]: W0227 11:01:00.955674 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd007ec_7007_4505_9839_dc43fd40ca3a.slice/crio-04fe08e523954fd46d3ce2afe71c1ae99e62fbc11f2f824e8cadea2550571e0a WatchSource:0}: Error finding container 04fe08e523954fd46d3ce2afe71c1ae99e62fbc11f2f824e8cadea2550571e0a: Status 404 returned error can't find the container with id 04fe08e523954fd46d3ce2afe71c1ae99e62fbc11f2f824e8cadea2550571e0a Feb 27 11:01:00 crc kubenswrapper[4998]: I0227 11:01:00.980287 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536501-rg2vz" event={"ID":"6fd007ec-7007-4505-9839-dc43fd40ca3a","Type":"ContainerStarted","Data":"04fe08e523954fd46d3ce2afe71c1ae99e62fbc11f2f824e8cadea2550571e0a"} Feb 27 11:01:02 crc kubenswrapper[4998]: I0227 11:01:02.018496 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536501-rg2vz" event={"ID":"6fd007ec-7007-4505-9839-dc43fd40ca3a","Type":"ContainerStarted","Data":"d77750240745a6865ecd585a470dc1ea69e5c00ba6542196ff4bb66b3e31af91"} Feb 27 11:01:04 crc kubenswrapper[4998]: I0227 11:01:04.048626 4998 generic.go:334] "Generic (PLEG): container finished" podID="6fd007ec-7007-4505-9839-dc43fd40ca3a" containerID="d77750240745a6865ecd585a470dc1ea69e5c00ba6542196ff4bb66b3e31af91" exitCode=0 Feb 27 11:01:04 crc kubenswrapper[4998]: I0227 11:01:04.048743 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536501-rg2vz" event={"ID":"6fd007ec-7007-4505-9839-dc43fd40ca3a","Type":"ContainerDied","Data":"d77750240745a6865ecd585a470dc1ea69e5c00ba6542196ff4bb66b3e31af91"} Feb 27 11:01:05 crc 
kubenswrapper[4998]: I0227 11:01:05.405887 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.519691 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-combined-ca-bundle\") pod \"6fd007ec-7007-4505-9839-dc43fd40ca3a\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.519759 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-config-data\") pod \"6fd007ec-7007-4505-9839-dc43fd40ca3a\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.519786 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kn7z\" (UniqueName: \"kubernetes.io/projected/6fd007ec-7007-4505-9839-dc43fd40ca3a-kube-api-access-2kn7z\") pod \"6fd007ec-7007-4505-9839-dc43fd40ca3a\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.519874 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-fernet-keys\") pod \"6fd007ec-7007-4505-9839-dc43fd40ca3a\" (UID: \"6fd007ec-7007-4505-9839-dc43fd40ca3a\") " Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.526512 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6fd007ec-7007-4505-9839-dc43fd40ca3a" (UID: "6fd007ec-7007-4505-9839-dc43fd40ca3a"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.526845 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd007ec-7007-4505-9839-dc43fd40ca3a-kube-api-access-2kn7z" (OuterVolumeSpecName: "kube-api-access-2kn7z") pod "6fd007ec-7007-4505-9839-dc43fd40ca3a" (UID: "6fd007ec-7007-4505-9839-dc43fd40ca3a"). InnerVolumeSpecName "kube-api-access-2kn7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.562684 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fd007ec-7007-4505-9839-dc43fd40ca3a" (UID: "6fd007ec-7007-4505-9839-dc43fd40ca3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.588611 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-config-data" (OuterVolumeSpecName: "config-data") pod "6fd007ec-7007-4505-9839-dc43fd40ca3a" (UID: "6fd007ec-7007-4505-9839-dc43fd40ca3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.622356 4998 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.622387 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.622395 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kn7z\" (UniqueName: \"kubernetes.io/projected/6fd007ec-7007-4505-9839-dc43fd40ca3a-kube-api-access-2kn7z\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:05 crc kubenswrapper[4998]: I0227 11:01:05.622405 4998 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fd007ec-7007-4505-9839-dc43fd40ca3a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:06 crc kubenswrapper[4998]: I0227 11:01:06.068999 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536501-rg2vz" event={"ID":"6fd007ec-7007-4505-9839-dc43fd40ca3a","Type":"ContainerDied","Data":"04fe08e523954fd46d3ce2afe71c1ae99e62fbc11f2f824e8cadea2550571e0a"} Feb 27 11:01:06 crc kubenswrapper[4998]: I0227 11:01:06.069038 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04fe08e523954fd46d3ce2afe71c1ae99e62fbc11f2f824e8cadea2550571e0a" Feb 27 11:01:06 crc kubenswrapper[4998]: I0227 11:01:06.069072 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536501-rg2vz" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.160102 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536502-glvp4"] Feb 27 11:02:00 crc kubenswrapper[4998]: E0227 11:02:00.161046 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd007ec-7007-4505-9839-dc43fd40ca3a" containerName="keystone-cron" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.161057 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd007ec-7007-4505-9839-dc43fd40ca3a" containerName="keystone-cron" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.161239 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd007ec-7007-4505-9839-dc43fd40ca3a" containerName="keystone-cron" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.161895 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536502-glvp4" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.164842 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.165404 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.167467 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.179691 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536502-glvp4"] Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.234449 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67d5g\" (UniqueName: 
\"kubernetes.io/projected/cee4571a-5b43-45e9-9369-1694d8cbf950-kube-api-access-67d5g\") pod \"auto-csr-approver-29536502-glvp4\" (UID: \"cee4571a-5b43-45e9-9369-1694d8cbf950\") " pod="openshift-infra/auto-csr-approver-29536502-glvp4" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.336377 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67d5g\" (UniqueName: \"kubernetes.io/projected/cee4571a-5b43-45e9-9369-1694d8cbf950-kube-api-access-67d5g\") pod \"auto-csr-approver-29536502-glvp4\" (UID: \"cee4571a-5b43-45e9-9369-1694d8cbf950\") " pod="openshift-infra/auto-csr-approver-29536502-glvp4" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.365301 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67d5g\" (UniqueName: \"kubernetes.io/projected/cee4571a-5b43-45e9-9369-1694d8cbf950-kube-api-access-67d5g\") pod \"auto-csr-approver-29536502-glvp4\" (UID: \"cee4571a-5b43-45e9-9369-1694d8cbf950\") " pod="openshift-infra/auto-csr-approver-29536502-glvp4" Feb 27 11:02:00 crc kubenswrapper[4998]: I0227 11:02:00.484361 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536502-glvp4" Feb 27 11:02:01 crc kubenswrapper[4998]: I0227 11:02:01.007145 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536502-glvp4"] Feb 27 11:02:01 crc kubenswrapper[4998]: I0227 11:02:01.676577 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536502-glvp4" event={"ID":"cee4571a-5b43-45e9-9369-1694d8cbf950","Type":"ContainerStarted","Data":"6eed69dbc38f15705d131b8f793ee3f21e2c58107631e528c67c4fc571c2a35a"} Feb 27 11:02:03 crc kubenswrapper[4998]: I0227 11:02:03.703537 4998 generic.go:334] "Generic (PLEG): container finished" podID="cee4571a-5b43-45e9-9369-1694d8cbf950" containerID="a40a2e4da94976d0c88200579958e02ee72a624ca7ccf0951dafa373dcfe4e10" exitCode=0 Feb 27 11:02:03 crc kubenswrapper[4998]: I0227 11:02:03.703605 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536502-glvp4" event={"ID":"cee4571a-5b43-45e9-9369-1694d8cbf950","Type":"ContainerDied","Data":"a40a2e4da94976d0c88200579958e02ee72a624ca7ccf0951dafa373dcfe4e10"} Feb 27 11:02:05 crc kubenswrapper[4998]: I0227 11:02:05.114504 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536502-glvp4" Feb 27 11:02:05 crc kubenswrapper[4998]: I0227 11:02:05.143353 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67d5g\" (UniqueName: \"kubernetes.io/projected/cee4571a-5b43-45e9-9369-1694d8cbf950-kube-api-access-67d5g\") pod \"cee4571a-5b43-45e9-9369-1694d8cbf950\" (UID: \"cee4571a-5b43-45e9-9369-1694d8cbf950\") " Feb 27 11:02:05 crc kubenswrapper[4998]: I0227 11:02:05.149853 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee4571a-5b43-45e9-9369-1694d8cbf950-kube-api-access-67d5g" (OuterVolumeSpecName: "kube-api-access-67d5g") pod "cee4571a-5b43-45e9-9369-1694d8cbf950" (UID: "cee4571a-5b43-45e9-9369-1694d8cbf950"). InnerVolumeSpecName "kube-api-access-67d5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:02:05 crc kubenswrapper[4998]: I0227 11:02:05.246758 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67d5g\" (UniqueName: \"kubernetes.io/projected/cee4571a-5b43-45e9-9369-1694d8cbf950-kube-api-access-67d5g\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:05 crc kubenswrapper[4998]: I0227 11:02:05.731836 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536502-glvp4" event={"ID":"cee4571a-5b43-45e9-9369-1694d8cbf950","Type":"ContainerDied","Data":"6eed69dbc38f15705d131b8f793ee3f21e2c58107631e528c67c4fc571c2a35a"} Feb 27 11:02:05 crc kubenswrapper[4998]: I0227 11:02:05.732281 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eed69dbc38f15705d131b8f793ee3f21e2c58107631e528c67c4fc571c2a35a" Feb 27 11:02:05 crc kubenswrapper[4998]: I0227 11:02:05.731929 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536502-glvp4" Feb 27 11:02:06 crc kubenswrapper[4998]: I0227 11:02:06.214393 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536496-hrmgf"] Feb 27 11:02:06 crc kubenswrapper[4998]: I0227 11:02:06.224805 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536496-hrmgf"] Feb 27 11:02:06 crc kubenswrapper[4998]: I0227 11:02:06.783829 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2097850c-5872-4115-837e-f7bdd36a5e0e" path="/var/lib/kubelet/pods/2097850c-5872-4115-837e-f7bdd36a5e0e/volumes" Feb 27 11:02:39 crc kubenswrapper[4998]: I0227 11:02:39.156873 4998 generic.go:334] "Generic (PLEG): container finished" podID="c9f848a0-3810-4574-82f8-097918a288a4" containerID="df9ff43759fbad9f51bd069d10682153d00966e79998be317ee27d7d5bce33db" exitCode=0 Feb 27 11:02:39 crc kubenswrapper[4998]: I0227 11:02:39.156988 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" event={"ID":"c9f848a0-3810-4574-82f8-097918a288a4","Type":"ContainerDied","Data":"df9ff43759fbad9f51bd069d10682153d00966e79998be317ee27d7d5bce33db"} Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.504701 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.505252 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.663807 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.752206 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwcmg\" (UniqueName: \"kubernetes.io/projected/c9f848a0-3810-4574-82f8-097918a288a4-kube-api-access-lwcmg\") pod \"c9f848a0-3810-4574-82f8-097918a288a4\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.752756 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-telemetry-combined-ca-bundle\") pod \"c9f848a0-3810-4574-82f8-097918a288a4\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.752792 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-inventory\") pod \"c9f848a0-3810-4574-82f8-097918a288a4\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.752826 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-0\") pod \"c9f848a0-3810-4574-82f8-097918a288a4\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.752872 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-1\") 
pod \"c9f848a0-3810-4574-82f8-097918a288a4\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.752970 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-2\") pod \"c9f848a0-3810-4574-82f8-097918a288a4\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.753011 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ssh-key-openstack-edpm-ipam\") pod \"c9f848a0-3810-4574-82f8-097918a288a4\" (UID: \"c9f848a0-3810-4574-82f8-097918a288a4\") " Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.759568 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c9f848a0-3810-4574-82f8-097918a288a4" (UID: "c9f848a0-3810-4574-82f8-097918a288a4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.771182 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f848a0-3810-4574-82f8-097918a288a4-kube-api-access-lwcmg" (OuterVolumeSpecName: "kube-api-access-lwcmg") pod "c9f848a0-3810-4574-82f8-097918a288a4" (UID: "c9f848a0-3810-4574-82f8-097918a288a4"). InnerVolumeSpecName "kube-api-access-lwcmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.787538 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "c9f848a0-3810-4574-82f8-097918a288a4" (UID: "c9f848a0-3810-4574-82f8-097918a288a4"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.788078 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9f848a0-3810-4574-82f8-097918a288a4" (UID: "c9f848a0-3810-4574-82f8-097918a288a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.790925 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-inventory" (OuterVolumeSpecName: "inventory") pod "c9f848a0-3810-4574-82f8-097918a288a4" (UID: "c9f848a0-3810-4574-82f8-097918a288a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.792486 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "c9f848a0-3810-4574-82f8-097918a288a4" (UID: "c9f848a0-3810-4574-82f8-097918a288a4"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.805512 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "c9f848a0-3810-4574-82f8-097918a288a4" (UID: "c9f848a0-3810-4574-82f8-097918a288a4"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.855848 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.855884 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwcmg\" (UniqueName: \"kubernetes.io/projected/c9f848a0-3810-4574-82f8-097918a288a4-kube-api-access-lwcmg\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.855892 4998 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.855901 4998 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.855911 4998 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:40 crc 
kubenswrapper[4998]: I0227 11:02:40.855920 4998 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:40 crc kubenswrapper[4998]: I0227 11:02:40.855931 4998 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c9f848a0-3810-4574-82f8-097918a288a4-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:41 crc kubenswrapper[4998]: I0227 11:02:41.187368 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" event={"ID":"c9f848a0-3810-4574-82f8-097918a288a4","Type":"ContainerDied","Data":"31f1e7a8bc32c4129d1171da07fd14e1ca1322e83a3e37b2448dee5871305596"} Feb 27 11:02:41 crc kubenswrapper[4998]: I0227 11:02:41.187414 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31f1e7a8bc32c4129d1171da07fd14e1ca1322e83a3e37b2448dee5871305596" Feb 27 11:02:41 crc kubenswrapper[4998]: I0227 11:02:41.187486 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg" Feb 27 11:02:49 crc kubenswrapper[4998]: I0227 11:02:49.434585 4998 scope.go:117] "RemoveContainer" containerID="c66bb1f650c2aa0f8be4b7e37e6bccd11964da4848402148bc570b6802a42150" Feb 27 11:03:10 crc kubenswrapper[4998]: I0227 11:03:10.505159 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:03:10 crc kubenswrapper[4998]: I0227 11:03:10.505826 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.206181 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 27 11:03:36 crc kubenswrapper[4998]: E0227 11:03:36.207532 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee4571a-5b43-45e9-9369-1694d8cbf950" containerName="oc" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.207552 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee4571a-5b43-45e9-9369-1694d8cbf950" containerName="oc" Feb 27 11:03:36 crc kubenswrapper[4998]: E0227 11:03:36.207573 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f848a0-3810-4574-82f8-097918a288a4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.207582 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f848a0-3810-4574-82f8-097918a288a4" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.207811 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee4571a-5b43-45e9-9369-1694d8cbf950" containerName="oc" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.207827 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f848a0-3810-4574-82f8-097918a288a4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.208584 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.211767 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.212271 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-78fmt" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.212307 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.214357 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.231908 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.333048 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.333120 4998 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8wb\" (UniqueName: \"kubernetes.io/projected/a3d42422-7c5a-4605-b6c0-79682b9511ed-kube-api-access-kn8wb\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.333178 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.333268 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.333309 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.333403 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-config-data\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.333433 
4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.333822 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.333931 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.436419 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.436519 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.436574 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.436623 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8wb\" (UniqueName: \"kubernetes.io/projected/a3d42422-7c5a-4605-b6c0-79682b9511ed-kube-api-access-kn8wb\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.436673 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.436738 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.436773 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.436867 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-config-data\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.436908 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.437424 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.438137 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.438276 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.438769 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.439701 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-config-data\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.445949 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.446202 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.446574 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.476443 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8wb\" (UniqueName: \"kubernetes.io/projected/a3d42422-7c5a-4605-b6c0-79682b9511ed-kube-api-access-kn8wb\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " 
pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.483337 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " pod="openstack/tempest-tests-tempest" Feb 27 11:03:36 crc kubenswrapper[4998]: I0227 11:03:36.559167 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 11:03:37 crc kubenswrapper[4998]: I0227 11:03:37.066412 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 27 11:03:37 crc kubenswrapper[4998]: I0227 11:03:37.845771 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3d42422-7c5a-4605-b6c0-79682b9511ed","Type":"ContainerStarted","Data":"aba0eb6c64365d27a7ad9978d6af2cf5d3c2083985fdb58d88024e4ab3a3adfb"} Feb 27 11:03:40 crc kubenswrapper[4998]: I0227 11:03:40.504951 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:03:40 crc kubenswrapper[4998]: I0227 11:03:40.512573 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:03:40 crc kubenswrapper[4998]: I0227 11:03:40.512642 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 11:03:40 crc 
kubenswrapper[4998]: I0227 11:03:40.513640 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa8a77df4c4a84d755ee497912634ba2635597c1a94094fe039e8e95846fe91f"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:03:40 crc kubenswrapper[4998]: I0227 11:03:40.513697 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://aa8a77df4c4a84d755ee497912634ba2635597c1a94094fe039e8e95846fe91f" gracePeriod=600 Feb 27 11:03:40 crc kubenswrapper[4998]: I0227 11:03:40.920169 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="aa8a77df4c4a84d755ee497912634ba2635597c1a94094fe039e8e95846fe91f" exitCode=0 Feb 27 11:03:40 crc kubenswrapper[4998]: I0227 11:03:40.921218 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"aa8a77df4c4a84d755ee497912634ba2635597c1a94094fe039e8e95846fe91f"} Feb 27 11:03:40 crc kubenswrapper[4998]: I0227 11:03:40.921275 4998 scope.go:117] "RemoveContainer" containerID="7b0185177c0eba8b1ede867992da9cdf152aa626aa33c9939f91ead1ce17d9c4" Feb 27 11:03:43 crc kubenswrapper[4998]: I0227 11:03:43.980592 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"} Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.184358 4998 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536504-stkb7"] Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.185966 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536504-stkb7" Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.191619 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.191794 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.191856 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.198902 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536504-stkb7"] Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.259337 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm5km\" (UniqueName: \"kubernetes.io/projected/5f5d454a-cf0e-418a-8026-4a7c4a47d253-kube-api-access-fm5km\") pod \"auto-csr-approver-29536504-stkb7\" (UID: \"5f5d454a-cf0e-418a-8026-4a7c4a47d253\") " pod="openshift-infra/auto-csr-approver-29536504-stkb7" Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.361098 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm5km\" (UniqueName: \"kubernetes.io/projected/5f5d454a-cf0e-418a-8026-4a7c4a47d253-kube-api-access-fm5km\") pod \"auto-csr-approver-29536504-stkb7\" (UID: \"5f5d454a-cf0e-418a-8026-4a7c4a47d253\") " pod="openshift-infra/auto-csr-approver-29536504-stkb7" Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.383060 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm5km\" 
(UniqueName: \"kubernetes.io/projected/5f5d454a-cf0e-418a-8026-4a7c4a47d253-kube-api-access-fm5km\") pod \"auto-csr-approver-29536504-stkb7\" (UID: \"5f5d454a-cf0e-418a-8026-4a7c4a47d253\") " pod="openshift-infra/auto-csr-approver-29536504-stkb7" Feb 27 11:04:00 crc kubenswrapper[4998]: I0227 11:04:00.524801 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536504-stkb7" Feb 27 11:04:11 crc kubenswrapper[4998]: E0227 11:04:11.639029 4998 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 27 11:04:11 crc kubenswrapper[4998]: E0227 11:04:11.641004 4998 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:
/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kn8wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a3d42422-7c5a-4605-b6c0-79682b9511ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Feb 27 11:04:11 crc kubenswrapper[4998]: E0227 11:04:11.648004 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a3d42422-7c5a-4605-b6c0-79682b9511ed" Feb 27 11:04:12 crc kubenswrapper[4998]: I0227 11:04:12.019267 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536504-stkb7"] Feb 27 11:04:12 crc kubenswrapper[4998]: I0227 11:04:12.286487 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536504-stkb7" event={"ID":"5f5d454a-cf0e-418a-8026-4a7c4a47d253","Type":"ContainerStarted","Data":"b9acb48b8c53ce46c1b0bf6594395aa7c856438b5fb55ab3db020f24d20b0c31"} Feb 27 11:04:12 crc kubenswrapper[4998]: E0227 11:04:12.287971 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a3d42422-7c5a-4605-b6c0-79682b9511ed" Feb 27 11:04:13 crc kubenswrapper[4998]: I0227 11:04:13.304864 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536504-stkb7" event={"ID":"5f5d454a-cf0e-418a-8026-4a7c4a47d253","Type":"ContainerStarted","Data":"ecb2662395685abb4009b04ad95af47df4babd9af901db78d07730df88667b5e"} Feb 27 11:04:13 crc kubenswrapper[4998]: I0227 11:04:13.329217 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536504-stkb7" podStartSLOduration=12.490686099 podStartE2EDuration="13.3291994s" podCreationTimestamp="2026-02-27 11:04:00 +0000 UTC" firstStartedPulling="2026-02-27 
11:04:12.02499274 +0000 UTC m=+2804.023263718" lastFinishedPulling="2026-02-27 11:04:12.863506011 +0000 UTC m=+2804.861777019" observedRunningTime="2026-02-27 11:04:13.318109164 +0000 UTC m=+2805.316380142" watchObservedRunningTime="2026-02-27 11:04:13.3291994 +0000 UTC m=+2805.327470368" Feb 27 11:04:14 crc kubenswrapper[4998]: I0227 11:04:14.316609 4998 generic.go:334] "Generic (PLEG): container finished" podID="5f5d454a-cf0e-418a-8026-4a7c4a47d253" containerID="ecb2662395685abb4009b04ad95af47df4babd9af901db78d07730df88667b5e" exitCode=0 Feb 27 11:04:14 crc kubenswrapper[4998]: I0227 11:04:14.316688 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536504-stkb7" event={"ID":"5f5d454a-cf0e-418a-8026-4a7c4a47d253","Type":"ContainerDied","Data":"ecb2662395685abb4009b04ad95af47df4babd9af901db78d07730df88667b5e"} Feb 27 11:04:15 crc kubenswrapper[4998]: I0227 11:04:15.685898 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536504-stkb7" Feb 27 11:04:15 crc kubenswrapper[4998]: I0227 11:04:15.737226 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm5km\" (UniqueName: \"kubernetes.io/projected/5f5d454a-cf0e-418a-8026-4a7c4a47d253-kube-api-access-fm5km\") pod \"5f5d454a-cf0e-418a-8026-4a7c4a47d253\" (UID: \"5f5d454a-cf0e-418a-8026-4a7c4a47d253\") " Feb 27 11:04:15 crc kubenswrapper[4998]: I0227 11:04:15.743869 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5d454a-cf0e-418a-8026-4a7c4a47d253-kube-api-access-fm5km" (OuterVolumeSpecName: "kube-api-access-fm5km") pod "5f5d454a-cf0e-418a-8026-4a7c4a47d253" (UID: "5f5d454a-cf0e-418a-8026-4a7c4a47d253"). InnerVolumeSpecName "kube-api-access-fm5km". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:04:15 crc kubenswrapper[4998]: I0227 11:04:15.839962 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm5km\" (UniqueName: \"kubernetes.io/projected/5f5d454a-cf0e-418a-8026-4a7c4a47d253-kube-api-access-fm5km\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:16 crc kubenswrapper[4998]: I0227 11:04:16.333581 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536504-stkb7" event={"ID":"5f5d454a-cf0e-418a-8026-4a7c4a47d253","Type":"ContainerDied","Data":"b9acb48b8c53ce46c1b0bf6594395aa7c856438b5fb55ab3db020f24d20b0c31"} Feb 27 11:04:16 crc kubenswrapper[4998]: I0227 11:04:16.333946 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9acb48b8c53ce46c1b0bf6594395aa7c856438b5fb55ab3db020f24d20b0c31" Feb 27 11:04:16 crc kubenswrapper[4998]: I0227 11:04:16.333625 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536504-stkb7" Feb 27 11:04:16 crc kubenswrapper[4998]: I0227 11:04:16.385719 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536498-c6m4k"] Feb 27 11:04:16 crc kubenswrapper[4998]: I0227 11:04:16.395058 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536498-c6m4k"] Feb 27 11:04:16 crc kubenswrapper[4998]: I0227 11:04:16.783395 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb7e87c2-b500-4b19-8173-7188e8e24443" path="/var/lib/kubelet/pods/bb7e87c2-b500-4b19-8173-7188e8e24443/volumes" Feb 27 11:04:29 crc kubenswrapper[4998]: I0227 11:04:29.464410 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3d42422-7c5a-4605-b6c0-79682b9511ed","Type":"ContainerStarted","Data":"01879e7e969f51e21c860f70873581cb0174678b53335e0c07f48ac6ae5bb0f1"} Feb 27 11:04:29 
crc kubenswrapper[4998]: I0227 11:04:29.487975 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.381126876 podStartE2EDuration="54.487955275s" podCreationTimestamp="2026-02-27 11:03:35 +0000 UTC" firstStartedPulling="2026-02-27 11:03:37.083544731 +0000 UTC m=+2769.081815699" lastFinishedPulling="2026-02-27 11:04:28.19037313 +0000 UTC m=+2820.188644098" observedRunningTime="2026-02-27 11:04:29.481782829 +0000 UTC m=+2821.480053797" watchObservedRunningTime="2026-02-27 11:04:29.487955275 +0000 UTC m=+2821.486226243" Feb 27 11:04:39 crc kubenswrapper[4998]: I0227 11:04:39.585084 4998 generic.go:334] "Generic (PLEG): container finished" podID="a3d42422-7c5a-4605-b6c0-79682b9511ed" containerID="01879e7e969f51e21c860f70873581cb0174678b53335e0c07f48ac6ae5bb0f1" exitCode=123 Feb 27 11:04:39 crc kubenswrapper[4998]: I0227 11:04:39.585263 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3d42422-7c5a-4605-b6c0-79682b9511ed","Type":"ContainerDied","Data":"01879e7e969f51e21c860f70873581cb0174678b53335e0c07f48ac6ae5bb0f1"} Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.062109 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.204142 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-temporary\") pod \"a3d42422-7c5a-4605-b6c0-79682b9511ed\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.204534 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a3d42422-7c5a-4605-b6c0-79682b9511ed\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.204589 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config-secret\") pod \"a3d42422-7c5a-4605-b6c0-79682b9511ed\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.204681 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config\") pod \"a3d42422-7c5a-4605-b6c0-79682b9511ed\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.204734 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-workdir\") pod \"a3d42422-7c5a-4605-b6c0-79682b9511ed\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.204777 4998 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ssh-key\") pod \"a3d42422-7c5a-4605-b6c0-79682b9511ed\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.204955 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn8wb\" (UniqueName: \"kubernetes.io/projected/a3d42422-7c5a-4605-b6c0-79682b9511ed-kube-api-access-kn8wb\") pod \"a3d42422-7c5a-4605-b6c0-79682b9511ed\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.204981 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a3d42422-7c5a-4605-b6c0-79682b9511ed" (UID: "a3d42422-7c5a-4605-b6c0-79682b9511ed"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.205054 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-config-data\") pod \"a3d42422-7c5a-4605-b6c0-79682b9511ed\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.205145 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ca-certs\") pod \"a3d42422-7c5a-4605-b6c0-79682b9511ed\" (UID: \"a3d42422-7c5a-4605-b6c0-79682b9511ed\") " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.205807 4998 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.206087 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-config-data" (OuterVolumeSpecName: "config-data") pod "a3d42422-7c5a-4605-b6c0-79682b9511ed" (UID: "a3d42422-7c5a-4605-b6c0-79682b9511ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.206967 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a3d42422-7c5a-4605-b6c0-79682b9511ed" (UID: "a3d42422-7c5a-4605-b6c0-79682b9511ed"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.211577 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a3d42422-7c5a-4605-b6c0-79682b9511ed" (UID: "a3d42422-7c5a-4605-b6c0-79682b9511ed"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.213928 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d42422-7c5a-4605-b6c0-79682b9511ed-kube-api-access-kn8wb" (OuterVolumeSpecName: "kube-api-access-kn8wb") pod "a3d42422-7c5a-4605-b6c0-79682b9511ed" (UID: "a3d42422-7c5a-4605-b6c0-79682b9511ed"). InnerVolumeSpecName "kube-api-access-kn8wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.235781 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a3d42422-7c5a-4605-b6c0-79682b9511ed" (UID: "a3d42422-7c5a-4605-b6c0-79682b9511ed"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.261637 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a3d42422-7c5a-4605-b6c0-79682b9511ed" (UID: "a3d42422-7c5a-4605-b6c0-79682b9511ed"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.264872 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a3d42422-7c5a-4605-b6c0-79682b9511ed" (UID: "a3d42422-7c5a-4605-b6c0-79682b9511ed"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.270278 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a3d42422-7c5a-4605-b6c0-79682b9511ed" (UID: "a3d42422-7c5a-4605-b6c0-79682b9511ed"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.307844 4998 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.307926 4998 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.307959 4998 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.307984 4998 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/a3d42422-7c5a-4605-b6c0-79682b9511ed-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.308009 4998 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.308034 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn8wb\" (UniqueName: \"kubernetes.io/projected/a3d42422-7c5a-4605-b6c0-79682b9511ed-kube-api-access-kn8wb\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.308056 4998 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3d42422-7c5a-4605-b6c0-79682b9511ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.308078 4998 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a3d42422-7c5a-4605-b6c0-79682b9511ed-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.328307 4998 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.410656 4998 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.626356 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a3d42422-7c5a-4605-b6c0-79682b9511ed","Type":"ContainerDied","Data":"aba0eb6c64365d27a7ad9978d6af2cf5d3c2083985fdb58d88024e4ab3a3adfb"} Feb 27 11:04:41 crc kubenswrapper[4998]: 
I0227 11:04:41.626395 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aba0eb6c64365d27a7ad9978d6af2cf5d3c2083985fdb58d88024e4ab3a3adfb" Feb 27 11:04:41 crc kubenswrapper[4998]: I0227 11:04:41.626452 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 11:04:49 crc kubenswrapper[4998]: I0227 11:04:49.579127 4998 scope.go:117] "RemoveContainer" containerID="6a694e911c917907b4eb2b60212a6482ce64cfb6ebcfac6b794bde2223bf5c1e" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.041209 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 27 11:04:53 crc kubenswrapper[4998]: E0227 11:04:53.042840 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d42422-7c5a-4605-b6c0-79682b9511ed" containerName="tempest-tests-tempest-tests-runner" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.042864 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d42422-7c5a-4605-b6c0-79682b9511ed" containerName="tempest-tests-tempest-tests-runner" Feb 27 11:04:53 crc kubenswrapper[4998]: E0227 11:04:53.042901 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5d454a-cf0e-418a-8026-4a7c4a47d253" containerName="oc" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.042909 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5d454a-cf0e-418a-8026-4a7c4a47d253" containerName="oc" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.043180 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5d454a-cf0e-418a-8026-4a7c4a47d253" containerName="oc" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.043207 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d42422-7c5a-4605-b6c0-79682b9511ed" containerName="tempest-tests-tempest-tests-runner" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 
11:04:53.043922 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.048280 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-78fmt" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.062855 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.198359 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f77398a3-6376-4fb2-9a1a-f3d0067d9cc4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.198411 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8prc\" (UniqueName: \"kubernetes.io/projected/f77398a3-6376-4fb2-9a1a-f3d0067d9cc4-kube-api-access-t8prc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f77398a3-6376-4fb2-9a1a-f3d0067d9cc4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.300170 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f77398a3-6376-4fb2-9a1a-f3d0067d9cc4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.300213 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t8prc\" (UniqueName: \"kubernetes.io/projected/f77398a3-6376-4fb2-9a1a-f3d0067d9cc4-kube-api-access-t8prc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f77398a3-6376-4fb2-9a1a-f3d0067d9cc4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.300707 4998 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f77398a3-6376-4fb2-9a1a-f3d0067d9cc4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.329164 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8prc\" (UniqueName: \"kubernetes.io/projected/f77398a3-6376-4fb2-9a1a-f3d0067d9cc4-kube-api-access-t8prc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f77398a3-6376-4fb2-9a1a-f3d0067d9cc4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.332721 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f77398a3-6376-4fb2-9a1a-f3d0067d9cc4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.399735 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.676491 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 27 11:04:53 crc kubenswrapper[4998]: I0227 11:04:53.771552 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f77398a3-6376-4fb2-9a1a-f3d0067d9cc4","Type":"ContainerStarted","Data":"3d3fdd8e1c76aab6a2cecbd922a3a84118bf99800d15e8dfb52f03c9ebc8ce65"} Feb 27 11:04:55 crc kubenswrapper[4998]: I0227 11:04:55.799501 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f77398a3-6376-4fb2-9a1a-f3d0067d9cc4","Type":"ContainerStarted","Data":"355e0a7e86d1dcca79fc9c1b4c7c9a8ed8befbe1e3e6bffb1a2ac1684fd30c7d"} Feb 27 11:04:55 crc kubenswrapper[4998]: I0227 11:04:55.831210 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.961857044 podStartE2EDuration="2.831181397s" podCreationTimestamp="2026-02-27 11:04:53 +0000 UTC" firstStartedPulling="2026-02-27 11:04:53.689280767 +0000 UTC m=+2845.687551755" lastFinishedPulling="2026-02-27 11:04:54.55860511 +0000 UTC m=+2846.556876108" observedRunningTime="2026-02-27 11:04:55.818083387 +0000 UTC m=+2847.816354365" watchObservedRunningTime="2026-02-27 11:04:55.831181397 +0000 UTC m=+2847.829452405" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.156749 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b7hpg"] Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.159765 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.168887 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7hpg"] Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.292512 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcnm7\" (UniqueName: \"kubernetes.io/projected/5951e940-a098-4def-97f9-c7f46c352132-kube-api-access-mcnm7\") pod \"redhat-marketplace-b7hpg\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.292612 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-utilities\") pod \"redhat-marketplace-b7hpg\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.292658 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-catalog-content\") pod \"redhat-marketplace-b7hpg\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.394101 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcnm7\" (UniqueName: \"kubernetes.io/projected/5951e940-a098-4def-97f9-c7f46c352132-kube-api-access-mcnm7\") pod \"redhat-marketplace-b7hpg\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.394174 4998 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-utilities\") pod \"redhat-marketplace-b7hpg\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.394202 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-catalog-content\") pod \"redhat-marketplace-b7hpg\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.394791 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-catalog-content\") pod \"redhat-marketplace-b7hpg\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.395446 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-utilities\") pod \"redhat-marketplace-b7hpg\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.421680 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcnm7\" (UniqueName: \"kubernetes.io/projected/5951e940-a098-4def-97f9-c7f46c352132-kube-api-access-mcnm7\") pod \"redhat-marketplace-b7hpg\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.489945 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:15 crc kubenswrapper[4998]: I0227 11:05:15.977504 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7hpg"] Feb 27 11:05:16 crc kubenswrapper[4998]: I0227 11:05:16.049488 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7hpg" event={"ID":"5951e940-a098-4def-97f9-c7f46c352132","Type":"ContainerStarted","Data":"66863bc9f6851f35b9a1afe8579c35eb917f1a3304ac6fa6df2ac55692846637"} Feb 27 11:05:17 crc kubenswrapper[4998]: I0227 11:05:17.063461 4998 generic.go:334] "Generic (PLEG): container finished" podID="5951e940-a098-4def-97f9-c7f46c352132" containerID="f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306" exitCode=0 Feb 27 11:05:17 crc kubenswrapper[4998]: I0227 11:05:17.063534 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7hpg" event={"ID":"5951e940-a098-4def-97f9-c7f46c352132","Type":"ContainerDied","Data":"f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306"} Feb 27 11:05:17 crc kubenswrapper[4998]: I0227 11:05:17.066601 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:05:18 crc kubenswrapper[4998]: I0227 11:05:18.086376 4998 generic.go:334] "Generic (PLEG): container finished" podID="5951e940-a098-4def-97f9-c7f46c352132" containerID="a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66" exitCode=0 Feb 27 11:05:18 crc kubenswrapper[4998]: I0227 11:05:18.086521 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7hpg" event={"ID":"5951e940-a098-4def-97f9-c7f46c352132","Type":"ContainerDied","Data":"a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66"} Feb 27 11:05:19 crc kubenswrapper[4998]: I0227 11:05:19.102015 4998 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-b7hpg" event={"ID":"5951e940-a098-4def-97f9-c7f46c352132","Type":"ContainerStarted","Data":"372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d"} Feb 27 11:05:19 crc kubenswrapper[4998]: I0227 11:05:19.148909 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b7hpg" podStartSLOduration=2.718318706 podStartE2EDuration="4.148882577s" podCreationTimestamp="2026-02-27 11:05:15 +0000 UTC" firstStartedPulling="2026-02-27 11:05:17.066090862 +0000 UTC m=+2869.064361870" lastFinishedPulling="2026-02-27 11:05:18.496654763 +0000 UTC m=+2870.494925741" observedRunningTime="2026-02-27 11:05:19.132351166 +0000 UTC m=+2871.130622174" watchObservedRunningTime="2026-02-27 11:05:19.148882577 +0000 UTC m=+2871.147153585" Feb 27 11:05:25 crc kubenswrapper[4998]: I0227 11:05:25.490502 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:25 crc kubenswrapper[4998]: I0227 11:05:25.491436 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:25 crc kubenswrapper[4998]: I0227 11:05:25.578040 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:26 crc kubenswrapper[4998]: I0227 11:05:26.271634 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:26 crc kubenswrapper[4998]: I0227 11:05:26.348909 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7hpg"] Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.206796 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b7hpg" 
podUID="5951e940-a098-4def-97f9-c7f46c352132" containerName="registry-server" containerID="cri-o://372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d" gracePeriod=2 Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.644412 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7hpg" Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.716595 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-utilities\") pod \"5951e940-a098-4def-97f9-c7f46c352132\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.717299 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-catalog-content\") pod \"5951e940-a098-4def-97f9-c7f46c352132\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.717335 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcnm7\" (UniqueName: \"kubernetes.io/projected/5951e940-a098-4def-97f9-c7f46c352132-kube-api-access-mcnm7\") pod \"5951e940-a098-4def-97f9-c7f46c352132\" (UID: \"5951e940-a098-4def-97f9-c7f46c352132\") " Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.717964 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-utilities" (OuterVolumeSpecName: "utilities") pod "5951e940-a098-4def-97f9-c7f46c352132" (UID: "5951e940-a098-4def-97f9-c7f46c352132"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.725897 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5951e940-a098-4def-97f9-c7f46c352132-kube-api-access-mcnm7" (OuterVolumeSpecName: "kube-api-access-mcnm7") pod "5951e940-a098-4def-97f9-c7f46c352132" (UID: "5951e940-a098-4def-97f9-c7f46c352132"). InnerVolumeSpecName "kube-api-access-mcnm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.753404 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5951e940-a098-4def-97f9-c7f46c352132" (UID: "5951e940-a098-4def-97f9-c7f46c352132"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.820078 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.820162 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5951e940-a098-4def-97f9-c7f46c352132-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:28 crc kubenswrapper[4998]: I0227 11:05:28.820189 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcnm7\" (UniqueName: \"kubernetes.io/projected/5951e940-a098-4def-97f9-c7f46c352132-kube-api-access-mcnm7\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.224982 4998 generic.go:334] "Generic (PLEG): container finished" podID="5951e940-a098-4def-97f9-c7f46c352132" containerID="372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d" exitCode=0
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.225040 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7hpg" event={"ID":"5951e940-a098-4def-97f9-c7f46c352132","Type":"ContainerDied","Data":"372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d"}
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.225077 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7hpg"
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.225113 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7hpg" event={"ID":"5951e940-a098-4def-97f9-c7f46c352132","Type":"ContainerDied","Data":"66863bc9f6851f35b9a1afe8579c35eb917f1a3304ac6fa6df2ac55692846637"}
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.225144 4998 scope.go:117] "RemoveContainer" containerID="372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d"
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.259989 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7hpg"]
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.265101 4998 scope.go:117] "RemoveContainer" containerID="a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66"
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.272058 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7hpg"]
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.288892 4998 scope.go:117] "RemoveContainer" containerID="f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306"
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.331940 4998 scope.go:117] "RemoveContainer" containerID="372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d"
Feb 27 11:05:29 crc kubenswrapper[4998]: E0227 11:05:29.332949 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d\": container with ID starting with 372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d not found: ID does not exist" containerID="372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d"
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.333018 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d"} err="failed to get container status \"372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d\": rpc error: code = NotFound desc = could not find container \"372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d\": container with ID starting with 372dd3982f4fb77bc13681ef983819a2078b46475248e81fa5a381cf8e95246d not found: ID does not exist"
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.333058 4998 scope.go:117] "RemoveContainer" containerID="a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66"
Feb 27 11:05:29 crc kubenswrapper[4998]: E0227 11:05:29.333434 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66\": container with ID starting with a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66 not found: ID does not exist" containerID="a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66"
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.333484 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66"} err="failed to get container status \"a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66\": rpc error: code = NotFound desc = could not find container \"a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66\": container with ID starting with a5abaa9c12eb9032a860247bdabd5ed4c7210a8c78f3e4ab2e5cf92d7f706a66 not found: ID does not exist"
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.333511 4998 scope.go:117] "RemoveContainer" containerID="f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306"
Feb 27 11:05:29 crc kubenswrapper[4998]: E0227 11:05:29.334071 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306\": container with ID starting with f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306 not found: ID does not exist" containerID="f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306"
Feb 27 11:05:29 crc kubenswrapper[4998]: I0227 11:05:29.334121 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306"} err="failed to get container status \"f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306\": rpc error: code = NotFound desc = could not find container \"f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306\": container with ID starting with f30e2363bca4e0c3865d6989671c7cef68c4d21f75e18566ae0fcc6c32f82306 not found: ID does not exist"
Feb 27 11:05:30 crc kubenswrapper[4998]: I0227 11:05:30.779931 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5951e940-a098-4def-97f9-c7f46c352132" path="/var/lib/kubelet/pods/5951e940-a098-4def-97f9-c7f46c352132/volumes"
Feb 27 11:05:31 crc kubenswrapper[4998]: E0227 11:05:31.574078 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice/crio-66863bc9f6851f35b9a1afe8579c35eb917f1a3304ac6fa6df2ac55692846637\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice\": RecentStats: unable to find data in memory cache]"
Feb 27 11:05:41 crc kubenswrapper[4998]: E0227 11:05:41.852388 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice/crio-66863bc9f6851f35b9a1afe8579c35eb917f1a3304ac6fa6df2ac55692846637\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice\": RecentStats: unable to find data in memory cache]"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.103031 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6ndsf/must-gather-b5b4f"]
Feb 27 11:05:44 crc kubenswrapper[4998]: E0227 11:05:44.103619 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5951e940-a098-4def-97f9-c7f46c352132" containerName="extract-utilities"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.103631 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5951e940-a098-4def-97f9-c7f46c352132" containerName="extract-utilities"
Feb 27 11:05:44 crc kubenswrapper[4998]: E0227 11:05:44.103641 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5951e940-a098-4def-97f9-c7f46c352132" containerName="extract-content"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.103647 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5951e940-a098-4def-97f9-c7f46c352132" containerName="extract-content"
Feb 27 11:05:44 crc kubenswrapper[4998]: E0227 11:05:44.103670 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5951e940-a098-4def-97f9-c7f46c352132" containerName="registry-server"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.103677 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="5951e940-a098-4def-97f9-c7f46c352132" containerName="registry-server"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.103846 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="5951e940-a098-4def-97f9-c7f46c352132" containerName="registry-server"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.104759 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6ndsf/must-gather-b5b4f"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.111839 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6ndsf"/"openshift-service-ca.crt"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.112293 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6ndsf"/"kube-root-ca.crt"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.137660 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6ndsf/must-gather-b5b4f"]
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.244011 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl4zk\" (UniqueName: \"kubernetes.io/projected/52d03bda-caf6-462f-96d8-37dd3c6ed001-kube-api-access-vl4zk\") pod \"must-gather-b5b4f\" (UID: \"52d03bda-caf6-462f-96d8-37dd3c6ed001\") " pod="openshift-must-gather-6ndsf/must-gather-b5b4f"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.244112 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/52d03bda-caf6-462f-96d8-37dd3c6ed001-must-gather-output\") pod \"must-gather-b5b4f\" (UID: \"52d03bda-caf6-462f-96d8-37dd3c6ed001\") " pod="openshift-must-gather-6ndsf/must-gather-b5b4f"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.346220 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl4zk\" (UniqueName: \"kubernetes.io/projected/52d03bda-caf6-462f-96d8-37dd3c6ed001-kube-api-access-vl4zk\") pod \"must-gather-b5b4f\" (UID: \"52d03bda-caf6-462f-96d8-37dd3c6ed001\") " pod="openshift-must-gather-6ndsf/must-gather-b5b4f"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.346558 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/52d03bda-caf6-462f-96d8-37dd3c6ed001-must-gather-output\") pod \"must-gather-b5b4f\" (UID: \"52d03bda-caf6-462f-96d8-37dd3c6ed001\") " pod="openshift-must-gather-6ndsf/must-gather-b5b4f"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.346947 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/52d03bda-caf6-462f-96d8-37dd3c6ed001-must-gather-output\") pod \"must-gather-b5b4f\" (UID: \"52d03bda-caf6-462f-96d8-37dd3c6ed001\") " pod="openshift-must-gather-6ndsf/must-gather-b5b4f"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.364166 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl4zk\" (UniqueName: \"kubernetes.io/projected/52d03bda-caf6-462f-96d8-37dd3c6ed001-kube-api-access-vl4zk\") pod \"must-gather-b5b4f\" (UID: \"52d03bda-caf6-462f-96d8-37dd3c6ed001\") " pod="openshift-must-gather-6ndsf/must-gather-b5b4f"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.425721 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6ndsf/must-gather-b5b4f"
Feb 27 11:05:44 crc kubenswrapper[4998]: I0227 11:05:44.892314 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6ndsf/must-gather-b5b4f"]
Feb 27 11:05:45 crc kubenswrapper[4998]: I0227 11:05:45.389313 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6ndsf/must-gather-b5b4f" event={"ID":"52d03bda-caf6-462f-96d8-37dd3c6ed001","Type":"ContainerStarted","Data":"818eb27be1721c9d99c7cc3539e71ba143105976c68379dc78404e93572d7ef4"}
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.524118 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k8vdc"]
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.535189 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.544040 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8vdc"]
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.604385 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-utilities\") pod \"community-operators-k8vdc\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") " pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.604534 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-catalog-content\") pod \"community-operators-k8vdc\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") " pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.604587 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gxx\" (UniqueName: \"kubernetes.io/projected/0e253718-3614-4c0b-a7ed-2e370e6d3369-kube-api-access-r8gxx\") pod \"community-operators-k8vdc\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") " pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.706659 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-utilities\") pod \"community-operators-k8vdc\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") " pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.706806 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-catalog-content\") pod \"community-operators-k8vdc\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") " pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.706862 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gxx\" (UniqueName: \"kubernetes.io/projected/0e253718-3614-4c0b-a7ed-2e370e6d3369-kube-api-access-r8gxx\") pod \"community-operators-k8vdc\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") " pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.707147 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-utilities\") pod \"community-operators-k8vdc\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") " pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.707158 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-catalog-content\") pod \"community-operators-k8vdc\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") " pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.727354 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gxx\" (UniqueName: \"kubernetes.io/projected/0e253718-3614-4c0b-a7ed-2e370e6d3369-kube-api-access-r8gxx\") pod \"community-operators-k8vdc\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") " pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:46 crc kubenswrapper[4998]: I0227 11:05:46.870971 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:47 crc kubenswrapper[4998]: W0227 11:05:47.244873 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e253718_3614_4c0b_a7ed_2e370e6d3369.slice/crio-d51fb3cc7e43088df9ea9239c1017212b03d9db5c3ec46d4d199dbed0ae4af4e WatchSource:0}: Error finding container d51fb3cc7e43088df9ea9239c1017212b03d9db5c3ec46d4d199dbed0ae4af4e: Status 404 returned error can't find the container with id d51fb3cc7e43088df9ea9239c1017212b03d9db5c3ec46d4d199dbed0ae4af4e
Feb 27 11:05:47 crc kubenswrapper[4998]: I0227 11:05:47.245159 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8vdc"]
Feb 27 11:05:47 crc kubenswrapper[4998]: I0227 11:05:47.419432 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8vdc" event={"ID":"0e253718-3614-4c0b-a7ed-2e370e6d3369","Type":"ContainerStarted","Data":"d51fb3cc7e43088df9ea9239c1017212b03d9db5c3ec46d4d199dbed0ae4af4e"}
Feb 27 11:05:48 crc kubenswrapper[4998]: I0227 11:05:48.433661 4998 generic.go:334] "Generic (PLEG): container finished" podID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerID="bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e" exitCode=0
Feb 27 11:05:48 crc kubenswrapper[4998]: I0227 11:05:48.433764 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8vdc" event={"ID":"0e253718-3614-4c0b-a7ed-2e370e6d3369","Type":"ContainerDied","Data":"bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e"}
Feb 27 11:05:52 crc kubenswrapper[4998]: E0227 11:05:52.118491 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice/crio-66863bc9f6851f35b9a1afe8579c35eb917f1a3304ac6fa6df2ac55692846637\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice\": RecentStats: unable to find data in memory cache]"
Feb 27 11:05:54 crc kubenswrapper[4998]: I0227 11:05:54.496778 4998 generic.go:334] "Generic (PLEG): container finished" podID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerID="fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907" exitCode=0
Feb 27 11:05:54 crc kubenswrapper[4998]: I0227 11:05:54.496877 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8vdc" event={"ID":"0e253718-3614-4c0b-a7ed-2e370e6d3369","Type":"ContainerDied","Data":"fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907"}
Feb 27 11:05:54 crc kubenswrapper[4998]: I0227 11:05:54.501140 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6ndsf/must-gather-b5b4f" event={"ID":"52d03bda-caf6-462f-96d8-37dd3c6ed001","Type":"ContainerStarted","Data":"cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513"}
Feb 27 11:05:54 crc kubenswrapper[4998]: I0227 11:05:54.501187 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6ndsf/must-gather-b5b4f" event={"ID":"52d03bda-caf6-462f-96d8-37dd3c6ed001","Type":"ContainerStarted","Data":"25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c"}
Feb 27 11:05:54 crc kubenswrapper[4998]: I0227 11:05:54.567497 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6ndsf/must-gather-b5b4f" podStartSLOduration=1.652535598 podStartE2EDuration="10.567476083s" podCreationTimestamp="2026-02-27 11:05:44 +0000 UTC" firstStartedPulling="2026-02-27 11:05:44.905094695 +0000 UTC m=+2896.903365703" lastFinishedPulling="2026-02-27 11:05:53.82003521 +0000 UTC m=+2905.818306188" observedRunningTime="2026-02-27 11:05:54.559147371 +0000 UTC m=+2906.557418349" watchObservedRunningTime="2026-02-27 11:05:54.567476083 +0000 UTC m=+2906.565747071"
Feb 27 11:05:55 crc kubenswrapper[4998]: I0227 11:05:55.516983 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8vdc" event={"ID":"0e253718-3614-4c0b-a7ed-2e370e6d3369","Type":"ContainerStarted","Data":"1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e"}
Feb 27 11:05:55 crc kubenswrapper[4998]: I0227 11:05:55.563902 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k8vdc" podStartSLOduration=5.888191201 podStartE2EDuration="9.563877825s" podCreationTimestamp="2026-02-27 11:05:46 +0000 UTC" firstStartedPulling="2026-02-27 11:05:51.213089338 +0000 UTC m=+2903.211360296" lastFinishedPulling="2026-02-27 11:05:54.888775952 +0000 UTC m=+2906.887046920" observedRunningTime="2026-02-27 11:05:55.556967951 +0000 UTC m=+2907.555238919" watchObservedRunningTime="2026-02-27 11:05:55.563877825 +0000 UTC m=+2907.562148793"
Feb 27 11:05:56 crc kubenswrapper[4998]: I0227 11:05:56.871680 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:56 crc kubenswrapper[4998]: I0227 11:05:56.871907 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:05:57 crc kubenswrapper[4998]: I0227 11:05:57.909366 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6ndsf/crc-debug-6b4hf"]
Feb 27 11:05:57 crc kubenswrapper[4998]: I0227 11:05:57.910728 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6ndsf/crc-debug-6b4hf"
Feb 27 11:05:57 crc kubenswrapper[4998]: I0227 11:05:57.912460 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6ndsf"/"default-dockercfg-2qlrz"
Feb 27 11:05:57 crc kubenswrapper[4998]: I0227 11:05:57.920274 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k8vdc" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerName="registry-server" probeResult="failure" output=<
Feb 27 11:05:57 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s
Feb 27 11:05:57 crc kubenswrapper[4998]: >
Feb 27 11:05:57 crc kubenswrapper[4998]: I0227 11:05:57.936630 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mtbk\" (UniqueName: \"kubernetes.io/projected/2cf7f972-7a85-4a02-bf39-943c4c01814d-kube-api-access-8mtbk\") pod \"crc-debug-6b4hf\" (UID: \"2cf7f972-7a85-4a02-bf39-943c4c01814d\") " pod="openshift-must-gather-6ndsf/crc-debug-6b4hf"
Feb 27 11:05:57 crc kubenswrapper[4998]: I0227 11:05:57.936779 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cf7f972-7a85-4a02-bf39-943c4c01814d-host\") pod \"crc-debug-6b4hf\" (UID: \"2cf7f972-7a85-4a02-bf39-943c4c01814d\") " pod="openshift-must-gather-6ndsf/crc-debug-6b4hf"
Feb 27 11:05:58 crc kubenswrapper[4998]: I0227 11:05:58.037490 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cf7f972-7a85-4a02-bf39-943c4c01814d-host\") pod \"crc-debug-6b4hf\" (UID: \"2cf7f972-7a85-4a02-bf39-943c4c01814d\") " pod="openshift-must-gather-6ndsf/crc-debug-6b4hf"
Feb 27 11:05:58 crc kubenswrapper[4998]: I0227 11:05:58.037613 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mtbk\" (UniqueName: \"kubernetes.io/projected/2cf7f972-7a85-4a02-bf39-943c4c01814d-kube-api-access-8mtbk\") pod \"crc-debug-6b4hf\" (UID: \"2cf7f972-7a85-4a02-bf39-943c4c01814d\") " pod="openshift-must-gather-6ndsf/crc-debug-6b4hf"
Feb 27 11:05:58 crc kubenswrapper[4998]: I0227 11:05:58.037992 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cf7f972-7a85-4a02-bf39-943c4c01814d-host\") pod \"crc-debug-6b4hf\" (UID: \"2cf7f972-7a85-4a02-bf39-943c4c01814d\") " pod="openshift-must-gather-6ndsf/crc-debug-6b4hf"
Feb 27 11:05:58 crc kubenswrapper[4998]: I0227 11:05:58.061465 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mtbk\" (UniqueName: \"kubernetes.io/projected/2cf7f972-7a85-4a02-bf39-943c4c01814d-kube-api-access-8mtbk\") pod \"crc-debug-6b4hf\" (UID: \"2cf7f972-7a85-4a02-bf39-943c4c01814d\") " pod="openshift-must-gather-6ndsf/crc-debug-6b4hf"
Feb 27 11:05:58 crc kubenswrapper[4998]: I0227 11:05:58.227724 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6ndsf/crc-debug-6b4hf"
Feb 27 11:05:58 crc kubenswrapper[4998]: I0227 11:05:58.539524 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6ndsf/crc-debug-6b4hf" event={"ID":"2cf7f972-7a85-4a02-bf39-943c4c01814d","Type":"ContainerStarted","Data":"7052e6b5bc6c33a6920fbc6399e2b8e07d1809c7278e403cc8d26776248de35d"}
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.145592 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536506-wb42h"]
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.146814 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536506-wb42h"
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.148782 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.150316 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.152094 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch"
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.156627 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536506-wb42h"]
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.276472 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxt5\" (UniqueName: \"kubernetes.io/projected/9e1ac321-7765-4310-9042-626ce024d9e4-kube-api-access-lsxt5\") pod \"auto-csr-approver-29536506-wb42h\" (UID: \"9e1ac321-7765-4310-9042-626ce024d9e4\") " pod="openshift-infra/auto-csr-approver-29536506-wb42h"
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.378138 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsxt5\" (UniqueName: \"kubernetes.io/projected/9e1ac321-7765-4310-9042-626ce024d9e4-kube-api-access-lsxt5\") pod \"auto-csr-approver-29536506-wb42h\" (UID: \"9e1ac321-7765-4310-9042-626ce024d9e4\") " pod="openshift-infra/auto-csr-approver-29536506-wb42h"
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.398547 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsxt5\" (UniqueName: \"kubernetes.io/projected/9e1ac321-7765-4310-9042-626ce024d9e4-kube-api-access-lsxt5\") pod \"auto-csr-approver-29536506-wb42h\" (UID: \"9e1ac321-7765-4310-9042-626ce024d9e4\") " pod="openshift-infra/auto-csr-approver-29536506-wb42h"
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.469493 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536506-wb42h"
Feb 27 11:06:00 crc kubenswrapper[4998]: I0227 11:06:00.938027 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536506-wb42h"]
Feb 27 11:06:01 crc kubenswrapper[4998]: I0227 11:06:01.570842 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536506-wb42h" event={"ID":"9e1ac321-7765-4310-9042-626ce024d9e4","Type":"ContainerStarted","Data":"13bedab64318a2d7249ac5ea3e4abc82e6fb8fca26a9878be7254784b77d1ce8"}
Feb 27 11:06:02 crc kubenswrapper[4998]: E0227 11:06:02.362770 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice/crio-66863bc9f6851f35b9a1afe8579c35eb917f1a3304ac6fa6df2ac55692846637\": RecentStats: unable to find data in memory cache]"
Feb 27 11:06:02 crc kubenswrapper[4998]: I0227 11:06:02.581458 4998 generic.go:334] "Generic (PLEG): container finished" podID="9e1ac321-7765-4310-9042-626ce024d9e4" containerID="d8037209095bf2f748f81ee6ce79d4a6d505c61a0aac2f3a7d7603b35c8402ef" exitCode=0
Feb 27 11:06:02 crc kubenswrapper[4998]: I0227 11:06:02.581516 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536506-wb42h" event={"ID":"9e1ac321-7765-4310-9042-626ce024d9e4","Type":"ContainerDied","Data":"d8037209095bf2f748f81ee6ce79d4a6d505c61a0aac2f3a7d7603b35c8402ef"}
Feb 27 11:06:06 crc kubenswrapper[4998]: I0227 11:06:06.922408 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:06:06 crc kubenswrapper[4998]: I0227 11:06:06.979282 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:06:07 crc kubenswrapper[4998]: I0227 11:06:07.159128 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8vdc"]
Feb 27 11:06:08 crc kubenswrapper[4998]: I0227 11:06:08.559720 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536506-wb42h"
Feb 27 11:06:08 crc kubenswrapper[4998]: I0227 11:06:08.633248 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsxt5\" (UniqueName: \"kubernetes.io/projected/9e1ac321-7765-4310-9042-626ce024d9e4-kube-api-access-lsxt5\") pod \"9e1ac321-7765-4310-9042-626ce024d9e4\" (UID: \"9e1ac321-7765-4310-9042-626ce024d9e4\") "
Feb 27 11:06:08 crc kubenswrapper[4998]: I0227 11:06:08.642299 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1ac321-7765-4310-9042-626ce024d9e4-kube-api-access-lsxt5" (OuterVolumeSpecName: "kube-api-access-lsxt5") pod "9e1ac321-7765-4310-9042-626ce024d9e4" (UID: "9e1ac321-7765-4310-9042-626ce024d9e4"). InnerVolumeSpecName "kube-api-access-lsxt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:06:08 crc kubenswrapper[4998]: I0227 11:06:08.654062 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536506-wb42h" event={"ID":"9e1ac321-7765-4310-9042-626ce024d9e4","Type":"ContainerDied","Data":"13bedab64318a2d7249ac5ea3e4abc82e6fb8fca26a9878be7254784b77d1ce8"}
Feb 27 11:06:08 crc kubenswrapper[4998]: I0227 11:06:08.654117 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13bedab64318a2d7249ac5ea3e4abc82e6fb8fca26a9878be7254784b77d1ce8"
Feb 27 11:06:08 crc kubenswrapper[4998]: I0227 11:06:08.654080 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536506-wb42h"
Feb 27 11:06:08 crc kubenswrapper[4998]: I0227 11:06:08.654164 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k8vdc" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerName="registry-server" containerID="cri-o://1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e" gracePeriod=2
Feb 27 11:06:08 crc kubenswrapper[4998]: I0227 11:06:08.736575 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsxt5\" (UniqueName: \"kubernetes.io/projected/9e1ac321-7765-4310-9042-626ce024d9e4-kube-api-access-lsxt5\") on node \"crc\" DevicePath \"\""
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.019134 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8vdc"
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.143754 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-catalog-content\") pod \"0e253718-3614-4c0b-a7ed-2e370e6d3369\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") "
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.144396 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-utilities\") pod \"0e253718-3614-4c0b-a7ed-2e370e6d3369\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") "
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.144583 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gxx\" (UniqueName: \"kubernetes.io/projected/0e253718-3614-4c0b-a7ed-2e370e6d3369-kube-api-access-r8gxx\") pod \"0e253718-3614-4c0b-a7ed-2e370e6d3369\" (UID: \"0e253718-3614-4c0b-a7ed-2e370e6d3369\") "
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.144966 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-utilities" (OuterVolumeSpecName: "utilities") pod "0e253718-3614-4c0b-a7ed-2e370e6d3369" (UID: "0e253718-3614-4c0b-a7ed-2e370e6d3369"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.145378 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.154449 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e253718-3614-4c0b-a7ed-2e370e6d3369-kube-api-access-r8gxx" (OuterVolumeSpecName: "kube-api-access-r8gxx") pod "0e253718-3614-4c0b-a7ed-2e370e6d3369" (UID: "0e253718-3614-4c0b-a7ed-2e370e6d3369"). InnerVolumeSpecName "kube-api-access-r8gxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.194824 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e253718-3614-4c0b-a7ed-2e370e6d3369" (UID: "0e253718-3614-4c0b-a7ed-2e370e6d3369"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.247005 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e253718-3614-4c0b-a7ed-2e370e6d3369-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.247037 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gxx\" (UniqueName: \"kubernetes.io/projected/0e253718-3614-4c0b-a7ed-2e370e6d3369-kube-api-access-r8gxx\") on node \"crc\" DevicePath \"\""
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.637472 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536500-q4vck"]
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.657924 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536500-q4vck"]
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.700797 4998 generic.go:334] "Generic (PLEG): container finished" podID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerID="1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e" exitCode=0
Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.700877 4998 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-k8vdc" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.700919 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8vdc" event={"ID":"0e253718-3614-4c0b-a7ed-2e370e6d3369","Type":"ContainerDied","Data":"1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e"} Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.703111 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8vdc" event={"ID":"0e253718-3614-4c0b-a7ed-2e370e6d3369","Type":"ContainerDied","Data":"d51fb3cc7e43088df9ea9239c1017212b03d9db5c3ec46d4d199dbed0ae4af4e"} Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.703132 4998 scope.go:117] "RemoveContainer" containerID="1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.712735 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6ndsf/crc-debug-6b4hf" event={"ID":"2cf7f972-7a85-4a02-bf39-943c4c01814d","Type":"ContainerStarted","Data":"aec93a6141f69cdb27ab2a02499f4e3270294bbcd8836b9ecae822d2876282e6"} Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.740015 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6ndsf/crc-debug-6b4hf" podStartSLOduration=2.463517291 podStartE2EDuration="12.739994536s" podCreationTimestamp="2026-02-27 11:05:57 +0000 UTC" firstStartedPulling="2026-02-27 11:05:58.263283513 +0000 UTC m=+2910.261554471" lastFinishedPulling="2026-02-27 11:06:08.539760738 +0000 UTC m=+2920.538031716" observedRunningTime="2026-02-27 11:06:09.731297744 +0000 UTC m=+2921.729568712" watchObservedRunningTime="2026-02-27 11:06:09.739994536 +0000 UTC m=+2921.738265504" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.753449 4998 scope.go:117] "RemoveContainer" 
containerID="fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.757894 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8vdc"] Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.776766 4998 scope.go:117] "RemoveContainer" containerID="bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.780862 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k8vdc"] Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.825939 4998 scope.go:117] "RemoveContainer" containerID="1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e" Feb 27 11:06:09 crc kubenswrapper[4998]: E0227 11:06:09.826717 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e\": container with ID starting with 1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e not found: ID does not exist" containerID="1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.826762 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e"} err="failed to get container status \"1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e\": rpc error: code = NotFound desc = could not find container \"1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e\": container with ID starting with 1036e75c366e4b88a422a447d7e0af1e4d9a8992536b317a3f5da662f5ff161e not found: ID does not exist" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.826792 4998 scope.go:117] "RemoveContainer" 
containerID="fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907" Feb 27 11:06:09 crc kubenswrapper[4998]: E0227 11:06:09.827104 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907\": container with ID starting with fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907 not found: ID does not exist" containerID="fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.827124 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907"} err="failed to get container status \"fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907\": rpc error: code = NotFound desc = could not find container \"fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907\": container with ID starting with fdcd4b2ecba8647b103e0ca2b089bbaa593b5d9fee527ccbc101545208d00907 not found: ID does not exist" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.827138 4998 scope.go:117] "RemoveContainer" containerID="bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e" Feb 27 11:06:09 crc kubenswrapper[4998]: E0227 11:06:09.827358 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e\": container with ID starting with bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e not found: ID does not exist" containerID="bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e" Feb 27 11:06:09 crc kubenswrapper[4998]: I0227 11:06:09.827373 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e"} err="failed to get container status \"bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e\": rpc error: code = NotFound desc = could not find container \"bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e\": container with ID starting with bb345f440c79a86b7f3de4657e98b686778a0bf8d3e9bb659ef4384a13ec0f7e not found: ID does not exist" Feb 27 11:06:10 crc kubenswrapper[4998]: I0227 11:06:10.504570 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:06:10 crc kubenswrapper[4998]: I0227 11:06:10.504624 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:06:10 crc kubenswrapper[4998]: I0227 11:06:10.775926 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" path="/var/lib/kubelet/pods/0e253718-3614-4c0b-a7ed-2e370e6d3369/volumes" Feb 27 11:06:10 crc kubenswrapper[4998]: I0227 11:06:10.776985 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7512eb-4b55-4205-9f1b-9dc902021e2f" path="/var/lib/kubelet/pods/1a7512eb-4b55-4205-9f1b-9dc902021e2f/volumes" Feb 27 11:06:12 crc kubenswrapper[4998]: E0227 11:06:12.576765 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice/crio-66863bc9f6851f35b9a1afe8579c35eb917f1a3304ac6fa6df2ac55692846637\": RecentStats: unable to find data in memory cache]" Feb 27 11:06:22 crc kubenswrapper[4998]: E0227 11:06:22.835203 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5951e940_a098_4def_97f9_c7f46c352132.slice/crio-66863bc9f6851f35b9a1afe8579c35eb917f1a3304ac6fa6df2ac55692846637\": RecentStats: unable to find data in memory cache]" Feb 27 11:06:22 crc kubenswrapper[4998]: I0227 11:06:22.855897 4998 generic.go:334] "Generic (PLEG): container finished" podID="2cf7f972-7a85-4a02-bf39-943c4c01814d" containerID="aec93a6141f69cdb27ab2a02499f4e3270294bbcd8836b9ecae822d2876282e6" exitCode=0 Feb 27 11:06:22 crc kubenswrapper[4998]: I0227 11:06:22.855942 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6ndsf/crc-debug-6b4hf" event={"ID":"2cf7f972-7a85-4a02-bf39-943c4c01814d","Type":"ContainerDied","Data":"aec93a6141f69cdb27ab2a02499f4e3270294bbcd8836b9ecae822d2876282e6"} Feb 27 11:06:23 crc kubenswrapper[4998]: I0227 11:06:23.963448 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6ndsf/crc-debug-6b4hf" Feb 27 11:06:23 crc kubenswrapper[4998]: I0227 11:06:23.990544 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6ndsf/crc-debug-6b4hf"] Feb 27 11:06:23 crc kubenswrapper[4998]: I0227 11:06:23.998178 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6ndsf/crc-debug-6b4hf"] Feb 27 11:06:24 crc kubenswrapper[4998]: I0227 11:06:24.053526 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mtbk\" (UniqueName: \"kubernetes.io/projected/2cf7f972-7a85-4a02-bf39-943c4c01814d-kube-api-access-8mtbk\") pod \"2cf7f972-7a85-4a02-bf39-943c4c01814d\" (UID: \"2cf7f972-7a85-4a02-bf39-943c4c01814d\") " Feb 27 11:06:24 crc kubenswrapper[4998]: I0227 11:06:24.053753 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cf7f972-7a85-4a02-bf39-943c4c01814d-host\") pod \"2cf7f972-7a85-4a02-bf39-943c4c01814d\" (UID: \"2cf7f972-7a85-4a02-bf39-943c4c01814d\") " Feb 27 11:06:24 crc kubenswrapper[4998]: I0227 11:06:24.053844 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cf7f972-7a85-4a02-bf39-943c4c01814d-host" (OuterVolumeSpecName: "host") pod "2cf7f972-7a85-4a02-bf39-943c4c01814d" (UID: "2cf7f972-7a85-4a02-bf39-943c4c01814d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:06:24 crc kubenswrapper[4998]: I0227 11:06:24.054153 4998 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cf7f972-7a85-4a02-bf39-943c4c01814d-host\") on node \"crc\" DevicePath \"\"" Feb 27 11:06:24 crc kubenswrapper[4998]: I0227 11:06:24.060146 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf7f972-7a85-4a02-bf39-943c4c01814d-kube-api-access-8mtbk" (OuterVolumeSpecName: "kube-api-access-8mtbk") pod "2cf7f972-7a85-4a02-bf39-943c4c01814d" (UID: "2cf7f972-7a85-4a02-bf39-943c4c01814d"). InnerVolumeSpecName "kube-api-access-8mtbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:06:24 crc kubenswrapper[4998]: I0227 11:06:24.159417 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mtbk\" (UniqueName: \"kubernetes.io/projected/2cf7f972-7a85-4a02-bf39-943c4c01814d-kube-api-access-8mtbk\") on node \"crc\" DevicePath \"\"" Feb 27 11:06:24 crc kubenswrapper[4998]: I0227 11:06:24.793806 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf7f972-7a85-4a02-bf39-943c4c01814d" path="/var/lib/kubelet/pods/2cf7f972-7a85-4a02-bf39-943c4c01814d/volumes" Feb 27 11:06:24 crc kubenswrapper[4998]: I0227 11:06:24.877537 4998 scope.go:117] "RemoveContainer" containerID="aec93a6141f69cdb27ab2a02499f4e3270294bbcd8836b9ecae822d2876282e6" Feb 27 11:06:24 crc kubenswrapper[4998]: I0227 11:06:24.877803 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6ndsf/crc-debug-6b4hf" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.165516 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6ndsf/crc-debug-xg2nc"] Feb 27 11:06:25 crc kubenswrapper[4998]: E0227 11:06:25.165861 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf7f972-7a85-4a02-bf39-943c4c01814d" containerName="container-00" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.165873 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf7f972-7a85-4a02-bf39-943c4c01814d" containerName="container-00" Feb 27 11:06:25 crc kubenswrapper[4998]: E0227 11:06:25.165892 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerName="extract-content" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.165898 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerName="extract-content" Feb 27 11:06:25 crc kubenswrapper[4998]: E0227 11:06:25.165906 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1ac321-7765-4310-9042-626ce024d9e4" containerName="oc" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.165912 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1ac321-7765-4310-9042-626ce024d9e4" containerName="oc" Feb 27 11:06:25 crc kubenswrapper[4998]: E0227 11:06:25.165924 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerName="extract-utilities" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.165929 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerName="extract-utilities" Feb 27 11:06:25 crc kubenswrapper[4998]: E0227 11:06:25.165940 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" 
containerName="registry-server" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.165945 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerName="registry-server" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.166121 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf7f972-7a85-4a02-bf39-943c4c01814d" containerName="container-00" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.166136 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e253718-3614-4c0b-a7ed-2e370e6d3369" containerName="registry-server" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.166149 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1ac321-7765-4310-9042-626ce024d9e4" containerName="oc" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.166717 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.169781 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6ndsf"/"default-dockercfg-2qlrz" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.288846 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/b5473782-fb33-4fdc-ae55-6161d3a22209-kube-api-access-bmtf2\") pod \"crc-debug-xg2nc\" (UID: \"b5473782-fb33-4fdc-ae55-6161d3a22209\") " pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.288987 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5473782-fb33-4fdc-ae55-6161d3a22209-host\") pod \"crc-debug-xg2nc\" (UID: \"b5473782-fb33-4fdc-ae55-6161d3a22209\") " pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 
11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.390253 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/b5473782-fb33-4fdc-ae55-6161d3a22209-kube-api-access-bmtf2\") pod \"crc-debug-xg2nc\" (UID: \"b5473782-fb33-4fdc-ae55-6161d3a22209\") " pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.390365 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5473782-fb33-4fdc-ae55-6161d3a22209-host\") pod \"crc-debug-xg2nc\" (UID: \"b5473782-fb33-4fdc-ae55-6161d3a22209\") " pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.390536 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5473782-fb33-4fdc-ae55-6161d3a22209-host\") pod \"crc-debug-xg2nc\" (UID: \"b5473782-fb33-4fdc-ae55-6161d3a22209\") " pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.412888 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/b5473782-fb33-4fdc-ae55-6161d3a22209-kube-api-access-bmtf2\") pod \"crc-debug-xg2nc\" (UID: \"b5473782-fb33-4fdc-ae55-6161d3a22209\") " pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.482285 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.890133 4998 generic.go:334] "Generic (PLEG): container finished" podID="b5473782-fb33-4fdc-ae55-6161d3a22209" containerID="6f744d1c269a55e8c178c3bc337a7e4ecef938dfee3e3925581f349109d3ec51" exitCode=1 Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.890339 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" event={"ID":"b5473782-fb33-4fdc-ae55-6161d3a22209","Type":"ContainerDied","Data":"6f744d1c269a55e8c178c3bc337a7e4ecef938dfee3e3925581f349109d3ec51"} Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.890408 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" event={"ID":"b5473782-fb33-4fdc-ae55-6161d3a22209","Type":"ContainerStarted","Data":"4795d50f1092f78d01f900040ff40fb53aa085319c40030b771583b848c2ea1c"} Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.932328 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6ndsf/crc-debug-xg2nc"] Feb 27 11:06:25 crc kubenswrapper[4998]: I0227 11:06:25.943863 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6ndsf/crc-debug-xg2nc"] Feb 27 11:06:26 crc kubenswrapper[4998]: I0227 11:06:26.986768 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 11:06:27 crc kubenswrapper[4998]: I0227 11:06:27.119666 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/b5473782-fb33-4fdc-ae55-6161d3a22209-kube-api-access-bmtf2\") pod \"b5473782-fb33-4fdc-ae55-6161d3a22209\" (UID: \"b5473782-fb33-4fdc-ae55-6161d3a22209\") " Feb 27 11:06:27 crc kubenswrapper[4998]: I0227 11:06:27.119976 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5473782-fb33-4fdc-ae55-6161d3a22209-host\") pod \"b5473782-fb33-4fdc-ae55-6161d3a22209\" (UID: \"b5473782-fb33-4fdc-ae55-6161d3a22209\") " Feb 27 11:06:27 crc kubenswrapper[4998]: I0227 11:06:27.120106 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5473782-fb33-4fdc-ae55-6161d3a22209-host" (OuterVolumeSpecName: "host") pod "b5473782-fb33-4fdc-ae55-6161d3a22209" (UID: "b5473782-fb33-4fdc-ae55-6161d3a22209"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:06:27 crc kubenswrapper[4998]: I0227 11:06:27.120485 4998 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5473782-fb33-4fdc-ae55-6161d3a22209-host\") on node \"crc\" DevicePath \"\"" Feb 27 11:06:27 crc kubenswrapper[4998]: I0227 11:06:27.135515 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5473782-fb33-4fdc-ae55-6161d3a22209-kube-api-access-bmtf2" (OuterVolumeSpecName: "kube-api-access-bmtf2") pod "b5473782-fb33-4fdc-ae55-6161d3a22209" (UID: "b5473782-fb33-4fdc-ae55-6161d3a22209"). InnerVolumeSpecName "kube-api-access-bmtf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:06:27 crc kubenswrapper[4998]: I0227 11:06:27.222630 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/b5473782-fb33-4fdc-ae55-6161d3a22209-kube-api-access-bmtf2\") on node \"crc\" DevicePath \"\"" Feb 27 11:06:27 crc kubenswrapper[4998]: I0227 11:06:27.906704 4998 scope.go:117] "RemoveContainer" containerID="6f744d1c269a55e8c178c3bc337a7e4ecef938dfee3e3925581f349109d3ec51" Feb 27 11:06:27 crc kubenswrapper[4998]: I0227 11:06:27.906761 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6ndsf/crc-debug-xg2nc" Feb 27 11:06:28 crc kubenswrapper[4998]: I0227 11:06:28.781392 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5473782-fb33-4fdc-ae55-6161d3a22209" path="/var/lib/kubelet/pods/b5473782-fb33-4fdc-ae55-6161d3a22209/volumes" Feb 27 11:06:38 crc kubenswrapper[4998]: I0227 11:06:38.816062 4998 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod9e1ac321-7765-4310-9042-626ce024d9e4"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod9e1ac321-7765-4310-9042-626ce024d9e4] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9e1ac321_7765_4310_9042_626ce024d9e4.slice" Feb 27 11:06:38 crc kubenswrapper[4998]: E0227 11:06:38.816594 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod9e1ac321-7765-4310-9042-626ce024d9e4] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod9e1ac321-7765-4310-9042-626ce024d9e4] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9e1ac321_7765_4310_9042_626ce024d9e4.slice" pod="openshift-infra/auto-csr-approver-29536506-wb42h" podUID="9e1ac321-7765-4310-9042-626ce024d9e4" Feb 27 11:06:39 crc kubenswrapper[4998]: I0227 
11:06:39.013126 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536506-wb42h" Feb 27 11:06:40 crc kubenswrapper[4998]: I0227 11:06:40.504495 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:06:40 crc kubenswrapper[4998]: I0227 11:06:40.504777 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:06:53 crc kubenswrapper[4998]: I0227 11:06:53.750762 4998 scope.go:117] "RemoveContainer" containerID="6391a4a5a0e8326c6dc3e7ca04c6c4bbb0aa97a0bba3190e9cc0d3695a3102be" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.128482 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8fcf676c4-nnmzc_5b6939aa-143d-43c5-9547-0bdebbebaf43/barbican-api/0.log" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.341134 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8fcf676c4-nnmzc_5b6939aa-143d-43c5-9547-0bdebbebaf43/barbican-api-log/0.log" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.427556 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-775f48c444-nclg6_6626932a-9b39-41b4-a857-0ef4489cc74c/barbican-keystone-listener/0.log" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.446693 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-775f48c444-nclg6_6626932a-9b39-41b4-a857-0ef4489cc74c/barbican-keystone-listener-log/0.log" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.558362 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5578686c6c-rhd22_f355d01e-079d-48f0-abc8-b26c45650314/barbican-worker/0.log" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.630110 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5578686c6c-rhd22_f355d01e-079d-48f0-abc8-b26c45650314/barbican-worker-log/0.log" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.766122 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xvnkl_bf074ce9-dfbc-44fe-8839-67aa9d2d43f8/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.827894 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f1132ff-2042-4f61-9889-9d666509cfc3/ceilometer-central-agent/0.log" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.919307 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f1132ff-2042-4f61-9889-9d666509cfc3/ceilometer-notification-agent/0.log" Feb 27 11:07:08 crc kubenswrapper[4998]: I0227 11:07:08.960155 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f1132ff-2042-4f61-9889-9d666509cfc3/proxy-httpd/0.log" Feb 27 11:07:09 crc kubenswrapper[4998]: I0227 11:07:09.005076 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4f1132ff-2042-4f61-9889-9d666509cfc3/sg-core/0.log" Feb 27 11:07:09 crc kubenswrapper[4998]: I0227 11:07:09.131387 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d7d4f173-de8d-491e-b190-3b00b6da940a/cinder-api/0.log" Feb 27 11:07:09 crc kubenswrapper[4998]: I0227 
11:07:09.182715 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d7d4f173-de8d-491e-b190-3b00b6da940a/cinder-api-log/0.log" Feb 27 11:07:09 crc kubenswrapper[4998]: I0227 11:07:09.261116 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a27e8b4-9378-4f49-8a5b-a336418a70e6/cinder-scheduler/0.log" Feb 27 11:07:09 crc kubenswrapper[4998]: I0227 11:07:09.375930 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7a27e8b4-9378-4f49-8a5b-a336418a70e6/probe/0.log" Feb 27 11:07:09 crc kubenswrapper[4998]: I0227 11:07:09.468127 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ph82m_23592e13-bc1d-4017-918b-1b78a059e903/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:09 crc kubenswrapper[4998]: I0227 11:07:09.598776 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-w89m5_090bec07-4f2c-4d72-be75-06274215cdf1/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:09 crc kubenswrapper[4998]: I0227 11:07:09.672460 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-zvnkz_50dafab7-7800-4392-a6e1-d081a9a29ae9/init/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.076571 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-zvnkz_50dafab7-7800-4392-a6e1-d081a9a29ae9/init/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.097026 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-zvnkz_50dafab7-7800-4392-a6e1-d081a9a29ae9/dnsmasq-dns/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.140055 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-t98b7_7fa18717-d186-4f9e-8a17-f3689b187491/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.299285 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_93b9f3a4-566e-4aa1-9980-7747c7d53efe/glance-httpd/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.326928 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_93b9f3a4-566e-4aa1-9980-7747c7d53efe/glance-log/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.464458 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a80b984c-5ec4-4e6e-9e7e-01c1653244d5/glance-httpd/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.500314 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a80b984c-5ec4-4e6e-9e7e-01c1653244d5/glance-log/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.504738 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.504805 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.504854 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.505809 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.505889 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" gracePeriod=600 Feb 27 11:07:10 crc kubenswrapper[4998]: E0227 11:07:10.627114 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.647758 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5d7f558cb4-k5mxh_c6f8dcd8-b50e-47b8-b54c-2aa103be577c/horizon/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.808182 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9n5t8_9ff2a9f1-4fc7-4a25-a209-3e58f6610e24/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.879936 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-5d7f558cb4-k5mxh_c6f8dcd8-b50e-47b8-b54c-2aa103be577c/horizon-log/0.log" Feb 27 11:07:10 crc kubenswrapper[4998]: I0227 11:07:10.985550 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-5dm8k_4feb60c2-418d-4406-ac16-26ad4e5d0825/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.099068 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7fb98b967f-nv7q9_ad724f01-459e-4616-b2a2-989f67e5f334/keystone-api/0.log" Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.192916 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29536501-rg2vz_6fd007ec-7007-4505-9839-dc43fd40ca3a/keystone-cron/0.log" Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.296423 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ba497ed5-458a-4720-a701-9e6f9a200c6d/kube-state-metrics/0.log" Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.320372 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" exitCode=0 Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.320418 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"} Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.320450 4998 scope.go:117] "RemoveContainer" containerID="aa8a77df4c4a84d755ee497912634ba2635597c1a94094fe039e8e95846fe91f" Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.321029 4998 scope.go:117] "RemoveContainer" 
containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:07:11 crc kubenswrapper[4998]: E0227 11:07:11.321430 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.446589 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4d9t9_201a3a34-a0ff-476a-9fb2-db9ad3757a58/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.682183 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c687d6d7f-drrbg_82249664-67b0-479a-b26e-4a756f1d8b35/neutron-api/0.log" Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.827587 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c687d6d7f-drrbg_82249664-67b0-479a-b26e-4a756f1d8b35/neutron-httpd/0.log" Feb 27 11:07:11 crc kubenswrapper[4998]: I0227 11:07:11.921844 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-l9sq6_fbf8ea11-95d4-4444-8f99-591502846aec/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:12 crc kubenswrapper[4998]: I0227 11:07:12.244997 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ba55db21-2e4c-4171-ab36-8b3ad880e27f/nova-api-log/0.log" Feb 27 11:07:12 crc kubenswrapper[4998]: I0227 11:07:12.368733 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ba55db21-2e4c-4171-ab36-8b3ad880e27f/nova-api-api/0.log" Feb 27 
11:07:12 crc kubenswrapper[4998]: I0227 11:07:12.487332 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3a51769e-cff7-4683-bfbe-b498c4c3f5f4/nova-cell0-conductor-conductor/0.log" Feb 27 11:07:12 crc kubenswrapper[4998]: I0227 11:07:12.639855 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f0d7aa87-1b5e-4f9e-a031-923f6c24c818/nova-cell1-conductor-conductor/0.log" Feb 27 11:07:12 crc kubenswrapper[4998]: I0227 11:07:12.798006 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_33ba9efe-d1af-4c38-b767-9f1b41518e97/nova-cell1-novncproxy-novncproxy/0.log" Feb 27 11:07:12 crc kubenswrapper[4998]: I0227 11:07:12.939546 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-g6xl9_dde05d60-5841-4834-b7fc-a0dea36c8a93/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:13 crc kubenswrapper[4998]: I0227 11:07:13.012275 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9945116f-270e-499c-b0a1-98473487ff27/nova-metadata-log/0.log" Feb 27 11:07:13 crc kubenswrapper[4998]: I0227 11:07:13.348774 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b061605c-7c6d-4893-ac07-0ce61a84be5e/nova-scheduler-scheduler/0.log" Feb 27 11:07:13 crc kubenswrapper[4998]: I0227 11:07:13.429474 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2/mysql-bootstrap/0.log" Feb 27 11:07:13 crc kubenswrapper[4998]: I0227 11:07:13.686264 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2/mysql-bootstrap/0.log" Feb 27 11:07:13 crc kubenswrapper[4998]: I0227 11:07:13.710253 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_7eaabc1a-14fa-4a68-b828-c6fc9a19d7b2/galera/0.log" Feb 27 11:07:13 crc kubenswrapper[4998]: I0227 11:07:13.838330 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9945116f-270e-499c-b0a1-98473487ff27/nova-metadata-metadata/0.log" Feb 27 11:07:13 crc kubenswrapper[4998]: I0227 11:07:13.927545 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bffe5e23-2abd-45ce-b167-5cc72eb06ae2/mysql-bootstrap/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.072980 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bffe5e23-2abd-45ce-b167-5cc72eb06ae2/mysql-bootstrap/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.083789 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bffe5e23-2abd-45ce-b167-5cc72eb06ae2/galera/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.130261 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f9f2ae90-9f69-40cf-92d9-b1e9e320b8f5/openstackclient/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.335301 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-695g4_ca3e5ab9-22e3-473e-8e9f-3c78b654bdc9/ovn-controller/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.373691 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jtqtq_2dd08b20-6d5e-4d2a-8237-fabf05188a4e/openstack-network-exporter/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.567006 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pds2s_b2c77aed-5925-42b3-a90c-3f7acc4da187/ovsdb-server-init/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.719377 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-pds2s_b2c77aed-5925-42b3-a90c-3f7acc4da187/ovsdb-server-init/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.727499 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pds2s_b2c77aed-5925-42b3-a90c-3f7acc4da187/ovs-vswitchd/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.800484 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pds2s_b2c77aed-5925-42b3-a90c-3f7acc4da187/ovsdb-server/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.948576 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-f8zmr_5384243b-a246-4b6d-8cba-101d025f3498/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:14 crc kubenswrapper[4998]: I0227 11:07:14.983926 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_196e9f4a-19f0-4a5d-b07b-fdfadfce3f87/openstack-network-exporter/0.log" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.045602 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_196e9f4a-19f0-4a5d-b07b-fdfadfce3f87/ovn-northd/0.log" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.204073 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6332347d-e3f4-468b-9e36-a1b27163d1cd/openstack-network-exporter/0.log" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.235652 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f7zpq"] Feb 27 11:07:15 crc kubenswrapper[4998]: E0227 11:07:15.236095 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5473782-fb33-4fdc-ae55-6161d3a22209" containerName="container-00" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.236106 4998 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b5473782-fb33-4fdc-ae55-6161d3a22209" containerName="container-00" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.236339 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5473782-fb33-4fdc-ae55-6161d3a22209" containerName="container-00" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.237571 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.242118 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6332347d-e3f4-468b-9e36-a1b27163d1cd/ovsdbserver-nb/0.log" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.283621 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f7zpq"] Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.396497 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h99q\" (UniqueName: \"kubernetes.io/projected/3fc8748e-db1e-4aa5-80b4-53e1f324573e-kube-api-access-8h99q\") pod \"redhat-operators-f7zpq\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.396554 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-catalog-content\") pod \"redhat-operators-f7zpq\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.396597 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-utilities\") pod \"redhat-operators-f7zpq\" (UID: 
\"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.402858 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d7fe9696-b73f-4bd1-a13f-ed934d6d8a90/openstack-network-exporter/0.log" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.497853 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h99q\" (UniqueName: \"kubernetes.io/projected/3fc8748e-db1e-4aa5-80b4-53e1f324573e-kube-api-access-8h99q\") pod \"redhat-operators-f7zpq\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.497953 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-catalog-content\") pod \"redhat-operators-f7zpq\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.498124 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-utilities\") pod \"redhat-operators-f7zpq\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.499497 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-utilities\") pod \"redhat-operators-f7zpq\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.499638 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-catalog-content\") pod \"redhat-operators-f7zpq\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.501068 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7455798654-94zkv_56b83176-1737-4d20-a5ad-0b88394e2d40/placement-api/0.log" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.507548 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d7fe9696-b73f-4bd1-a13f-ed934d6d8a90/ovsdbserver-sb/0.log" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.518601 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h99q\" (UniqueName: \"kubernetes.io/projected/3fc8748e-db1e-4aa5-80b4-53e1f324573e-kube-api-access-8h99q\") pod \"redhat-operators-f7zpq\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.556314 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.718617 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7455798654-94zkv_56b83176-1737-4d20-a5ad-0b88394e2d40/placement-log/0.log" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.792953 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7d2295db-a05f-492a-82d7-295fd2222daf/setup-container/0.log" Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.887110 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f7zpq"] Feb 27 11:07:15 crc kubenswrapper[4998]: I0227 11:07:15.990846 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7d2295db-a05f-492a-82d7-295fd2222daf/setup-container/0.log" Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.133358 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_058b490f-35b3-42df-a3b3-a684664a0e44/setup-container/0.log" Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.174832 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7d2295db-a05f-492a-82d7-295fd2222daf/rabbitmq/0.log" Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.273589 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_058b490f-35b3-42df-a3b3-a684664a0e44/setup-container/0.log" Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.360613 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_058b490f-35b3-42df-a3b3-a684664a0e44/rabbitmq/0.log" Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.377961 4998 generic.go:334] "Generic (PLEG): container finished" podID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" 
containerID="2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d" exitCode=0 Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.378002 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7zpq" event={"ID":"3fc8748e-db1e-4aa5-80b4-53e1f324573e","Type":"ContainerDied","Data":"2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d"} Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.378029 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7zpq" event={"ID":"3fc8748e-db1e-4aa5-80b4-53e1f324573e","Type":"ContainerStarted","Data":"fcdefce051486b8cecefed753ba2273d89a025ac3d54e0215ef0e5707b428493"} Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.447388 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-gml48_96387bdc-46ea-4452-afb8-6d0a3fa3a80e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.684642 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-v8vlw_e1d55b1c-6f76-403e-ab08-b47e20de4314/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:16 crc kubenswrapper[4998]: I0227 11:07:16.710192 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-kzbf2_e543d9a5-720b-41ac-92ab-8a2895ce25c3/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.188647 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dznmc_46e0d517-7041-41ce-8cbc-9ed19afff0cb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.353722 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-q5888_176d4d7b-4d9a-4b59-a556-f534a5a574d8/ssh-known-hosts-edpm-deployment/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.522577 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7758b6f85-bxf6h_304f7a70-581a-407b-9280-fe7642feb71f/proxy-server/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.523314 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7758b6f85-bxf6h_304f7a70-581a-407b-9280-fe7642feb71f/proxy-httpd/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.583790 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-c2cn4_4ca09e09-cedc-4476-bbe3-d179893232c8/swift-ring-rebalance/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.733303 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/account-auditor/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.810968 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/account-reaper/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.891847 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/account-replicator/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.986719 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/account-server/0.log" Feb 27 11:07:17 crc kubenswrapper[4998]: I0227 11:07:17.997694 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/container-auditor/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.019526 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/container-replicator/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.141139 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/container-server/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.188042 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/container-updater/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.203087 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/object-expirer/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.269499 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/object-auditor/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.377649 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/object-replicator/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.418167 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7zpq" event={"ID":"3fc8748e-db1e-4aa5-80b4-53e1f324573e","Type":"ContainerStarted","Data":"cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3"} Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.425970 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/object-updater/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.509998 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/rsync/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.541898 4998 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/object-server/0.log" Feb 27 11:07:18 crc kubenswrapper[4998]: I0227 11:07:18.684298 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0928c45d-8553-49e6-a068-3e2e75a28c69/swift-recon-cron/0.log" Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.126802 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a3d42422-7c5a-4605-b6c0-79682b9511ed/tempest-tests-tempest-tests-runner/0.log" Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.180526 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-wd9lg_c9f848a0-3810-4574-82f8-097918a288a4/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.289610 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_f77398a3-6376-4fb2-9a1a-f3d0067d9cc4/test-operator-logs-container/0.log" Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.413703 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-xtvzq_cba7f413-6f64-4932-99d8-5894c5f3ab7a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.428835 4998 generic.go:334] "Generic (PLEG): container finished" podID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerID="cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3" exitCode=0 Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.428874 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7zpq" event={"ID":"3fc8748e-db1e-4aa5-80b4-53e1f324573e","Type":"ContainerDied","Data":"cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3"} Feb 27 
11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.839305 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fwj4m"] Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.841317 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.850954 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwj4m"] Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.988400 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-utilities\") pod \"certified-operators-fwj4m\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.988454 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-catalog-content\") pod \"certified-operators-fwj4m\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:19 crc kubenswrapper[4998]: I0227 11:07:19.988506 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrd76\" (UniqueName: \"kubernetes.io/projected/41898aba-d293-4c14-84f9-50a9376209b0-kube-api-access-wrd76\") pod \"certified-operators-fwj4m\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.089623 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd76\" (UniqueName: 
\"kubernetes.io/projected/41898aba-d293-4c14-84f9-50a9376209b0-kube-api-access-wrd76\") pod \"certified-operators-fwj4m\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.089963 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-utilities\") pod \"certified-operators-fwj4m\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.090068 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-catalog-content\") pod \"certified-operators-fwj4m\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.090528 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-utilities\") pod \"certified-operators-fwj4m\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.090560 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-catalog-content\") pod \"certified-operators-fwj4m\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.107244 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd76\" (UniqueName: 
\"kubernetes.io/projected/41898aba-d293-4c14-84f9-50a9376209b0-kube-api-access-wrd76\") pod \"certified-operators-fwj4m\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.173668 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.447543 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7zpq" event={"ID":"3fc8748e-db1e-4aa5-80b4-53e1f324573e","Type":"ContainerStarted","Data":"4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d"} Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.488433 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f7zpq" podStartSLOduration=1.977829437 podStartE2EDuration="5.488408387s" podCreationTimestamp="2026-02-27 11:07:15 +0000 UTC" firstStartedPulling="2026-02-27 11:07:16.379640454 +0000 UTC m=+2988.377911422" lastFinishedPulling="2026-02-27 11:07:19.890219394 +0000 UTC m=+2991.888490372" observedRunningTime="2026-02-27 11:07:20.473600512 +0000 UTC m=+2992.471871500" watchObservedRunningTime="2026-02-27 11:07:20.488408387 +0000 UTC m=+2992.486679355" Feb 27 11:07:20 crc kubenswrapper[4998]: I0227 11:07:20.705205 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwj4m"] Feb 27 11:07:21 crc kubenswrapper[4998]: I0227 11:07:21.464064 4998 generic.go:334] "Generic (PLEG): container finished" podID="41898aba-d293-4c14-84f9-50a9376209b0" containerID="d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f" exitCode=0 Feb 27 11:07:21 crc kubenswrapper[4998]: I0227 11:07:21.464709 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwj4m" 
event={"ID":"41898aba-d293-4c14-84f9-50a9376209b0","Type":"ContainerDied","Data":"d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f"} Feb 27 11:07:21 crc kubenswrapper[4998]: I0227 11:07:21.468690 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwj4m" event={"ID":"41898aba-d293-4c14-84f9-50a9376209b0","Type":"ContainerStarted","Data":"13cc7b2e61eca457a1b6767f4ce1c57f8eff70a202a6d0b8bb50692d3903a0b0"} Feb 27 11:07:22 crc kubenswrapper[4998]: I0227 11:07:22.476869 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwj4m" event={"ID":"41898aba-d293-4c14-84f9-50a9376209b0","Type":"ContainerStarted","Data":"8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c"} Feb 27 11:07:22 crc kubenswrapper[4998]: I0227 11:07:22.678391 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f00d2d1a-0f01-465b-9009-3cc7e1544fc0/memcached/0.log" Feb 27 11:07:22 crc kubenswrapper[4998]: I0227 11:07:22.767417 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:07:22 crc kubenswrapper[4998]: E0227 11:07:22.767660 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:07:23 crc kubenswrapper[4998]: I0227 11:07:23.489748 4998 generic.go:334] "Generic (PLEG): container finished" podID="41898aba-d293-4c14-84f9-50a9376209b0" containerID="8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c" exitCode=0 Feb 27 11:07:23 crc kubenswrapper[4998]: I0227 11:07:23.489811 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwj4m" event={"ID":"41898aba-d293-4c14-84f9-50a9376209b0","Type":"ContainerDied","Data":"8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c"} Feb 27 11:07:24 crc kubenswrapper[4998]: I0227 11:07:24.500947 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwj4m" event={"ID":"41898aba-d293-4c14-84f9-50a9376209b0","Type":"ContainerStarted","Data":"99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0"} Feb 27 11:07:24 crc kubenswrapper[4998]: I0227 11:07:24.528930 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fwj4m" podStartSLOduration=2.780525125 podStartE2EDuration="5.528908309s" podCreationTimestamp="2026-02-27 11:07:19 +0000 UTC" firstStartedPulling="2026-02-27 11:07:21.470870748 +0000 UTC m=+2993.469141716" lastFinishedPulling="2026-02-27 11:07:24.219253932 +0000 UTC m=+2996.217524900" observedRunningTime="2026-02-27 11:07:24.520194437 +0000 UTC m=+2996.518465415" watchObservedRunningTime="2026-02-27 11:07:24.528908309 +0000 UTC m=+2996.527179277" Feb 27 11:07:25 crc kubenswrapper[4998]: I0227 11:07:25.556507 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:25 crc kubenswrapper[4998]: I0227 11:07:25.557251 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:26 crc kubenswrapper[4998]: I0227 11:07:26.605697 4998 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f7zpq" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerName="registry-server" probeResult="failure" output=< Feb 27 11:07:26 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s Feb 27 11:07:26 crc kubenswrapper[4998]: 
> Feb 27 11:07:30 crc kubenswrapper[4998]: I0227 11:07:30.174522 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:30 crc kubenswrapper[4998]: I0227 11:07:30.174975 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:30 crc kubenswrapper[4998]: I0227 11:07:30.230104 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:30 crc kubenswrapper[4998]: I0227 11:07:30.635880 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:30 crc kubenswrapper[4998]: I0227 11:07:30.680562 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwj4m"] Feb 27 11:07:32 crc kubenswrapper[4998]: I0227 11:07:32.562833 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fwj4m" podUID="41898aba-d293-4c14-84f9-50a9376209b0" containerName="registry-server" containerID="cri-o://99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0" gracePeriod=2 Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.078347 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.150465 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrd76\" (UniqueName: \"kubernetes.io/projected/41898aba-d293-4c14-84f9-50a9376209b0-kube-api-access-wrd76\") pod \"41898aba-d293-4c14-84f9-50a9376209b0\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.150605 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-catalog-content\") pod \"41898aba-d293-4c14-84f9-50a9376209b0\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.150773 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-utilities\") pod \"41898aba-d293-4c14-84f9-50a9376209b0\" (UID: \"41898aba-d293-4c14-84f9-50a9376209b0\") " Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.151493 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-utilities" (OuterVolumeSpecName: "utilities") pod "41898aba-d293-4c14-84f9-50a9376209b0" (UID: "41898aba-d293-4c14-84f9-50a9376209b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.166876 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41898aba-d293-4c14-84f9-50a9376209b0-kube-api-access-wrd76" (OuterVolumeSpecName: "kube-api-access-wrd76") pod "41898aba-d293-4c14-84f9-50a9376209b0" (UID: "41898aba-d293-4c14-84f9-50a9376209b0"). InnerVolumeSpecName "kube-api-access-wrd76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.208954 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41898aba-d293-4c14-84f9-50a9376209b0" (UID: "41898aba-d293-4c14-84f9-50a9376209b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.253257 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.253310 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrd76\" (UniqueName: \"kubernetes.io/projected/41898aba-d293-4c14-84f9-50a9376209b0-kube-api-access-wrd76\") on node \"crc\" DevicePath \"\"" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.253326 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41898aba-d293-4c14-84f9-50a9376209b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.572122 4998 generic.go:334] "Generic (PLEG): container finished" podID="41898aba-d293-4c14-84f9-50a9376209b0" containerID="99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0" exitCode=0 Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.572156 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fwj4m" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.572196 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwj4m" event={"ID":"41898aba-d293-4c14-84f9-50a9376209b0","Type":"ContainerDied","Data":"99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0"} Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.572277 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwj4m" event={"ID":"41898aba-d293-4c14-84f9-50a9376209b0","Type":"ContainerDied","Data":"13cc7b2e61eca457a1b6767f4ce1c57f8eff70a202a6d0b8bb50692d3903a0b0"} Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.572298 4998 scope.go:117] "RemoveContainer" containerID="99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.593334 4998 scope.go:117] "RemoveContainer" containerID="8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.613694 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwj4m"] Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.624317 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fwj4m"] Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.639610 4998 scope.go:117] "RemoveContainer" containerID="d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.661607 4998 scope.go:117] "RemoveContainer" containerID="99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0" Feb 27 11:07:33 crc kubenswrapper[4998]: E0227 11:07:33.662289 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0\": container with ID starting with 99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0 not found: ID does not exist" containerID="99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.662339 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0"} err="failed to get container status \"99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0\": rpc error: code = NotFound desc = could not find container \"99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0\": container with ID starting with 99b9f6bb554536da854a28ac1aaf180be121ced62ac56ee0b65088f61b6e0ea0 not found: ID does not exist" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.662374 4998 scope.go:117] "RemoveContainer" containerID="8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c" Feb 27 11:07:33 crc kubenswrapper[4998]: E0227 11:07:33.663018 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c\": container with ID starting with 8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c not found: ID does not exist" containerID="8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.663082 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c"} err="failed to get container status \"8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c\": rpc error: code = NotFound desc = could not find container \"8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c\": container with ID 
starting with 8617fc915b33802f7f19d9bfbde85ed5ac4316603c8366c87ec171fdfae4960c not found: ID does not exist" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.663136 4998 scope.go:117] "RemoveContainer" containerID="d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f" Feb 27 11:07:33 crc kubenswrapper[4998]: E0227 11:07:33.663501 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f\": container with ID starting with d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f not found: ID does not exist" containerID="d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f" Feb 27 11:07:33 crc kubenswrapper[4998]: I0227 11:07:33.663548 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f"} err="failed to get container status \"d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f\": rpc error: code = NotFound desc = could not find container \"d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f\": container with ID starting with d70d167d111349a56d50f78339b517ba2094e2dcb086c766ea4575678409eb5f not found: ID does not exist" Feb 27 11:07:34 crc kubenswrapper[4998]: I0227 11:07:34.766503 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:07:34 crc kubenswrapper[4998]: E0227 11:07:34.766986 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" 
podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:07:34 crc kubenswrapper[4998]: I0227 11:07:34.779012 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41898aba-d293-4c14-84f9-50a9376209b0" path="/var/lib/kubelet/pods/41898aba-d293-4c14-84f9-50a9376209b0/volumes" Feb 27 11:07:35 crc kubenswrapper[4998]: I0227 11:07:35.621440 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:35 crc kubenswrapper[4998]: I0227 11:07:35.696686 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:36 crc kubenswrapper[4998]: I0227 11:07:36.882141 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f7zpq"] Feb 27 11:07:37 crc kubenswrapper[4998]: I0227 11:07:37.619296 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f7zpq" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerName="registry-server" containerID="cri-o://4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d" gracePeriod=2 Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.125460 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.269984 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h99q\" (UniqueName: \"kubernetes.io/projected/3fc8748e-db1e-4aa5-80b4-53e1f324573e-kube-api-access-8h99q\") pod \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.270453 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-catalog-content\") pod \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.270639 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-utilities\") pod \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\" (UID: \"3fc8748e-db1e-4aa5-80b4-53e1f324573e\") " Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.272141 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-utilities" (OuterVolumeSpecName: "utilities") pod "3fc8748e-db1e-4aa5-80b4-53e1f324573e" (UID: "3fc8748e-db1e-4aa5-80b4-53e1f324573e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.279633 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc8748e-db1e-4aa5-80b4-53e1f324573e-kube-api-access-8h99q" (OuterVolumeSpecName: "kube-api-access-8h99q") pod "3fc8748e-db1e-4aa5-80b4-53e1f324573e" (UID: "3fc8748e-db1e-4aa5-80b4-53e1f324573e"). InnerVolumeSpecName "kube-api-access-8h99q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.373301 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.373348 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h99q\" (UniqueName: \"kubernetes.io/projected/3fc8748e-db1e-4aa5-80b4-53e1f324573e-kube-api-access-8h99q\") on node \"crc\" DevicePath \"\"" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.382104 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fc8748e-db1e-4aa5-80b4-53e1f324573e" (UID: "3fc8748e-db1e-4aa5-80b4-53e1f324573e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.475827 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc8748e-db1e-4aa5-80b4-53e1f324573e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.632376 4998 generic.go:334] "Generic (PLEG): container finished" podID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerID="4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d" exitCode=0 Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.632437 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7zpq" event={"ID":"3fc8748e-db1e-4aa5-80b4-53e1f324573e","Type":"ContainerDied","Data":"4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d"} Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.632468 4998 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f7zpq" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.632518 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7zpq" event={"ID":"3fc8748e-db1e-4aa5-80b4-53e1f324573e","Type":"ContainerDied","Data":"fcdefce051486b8cecefed753ba2273d89a025ac3d54e0215ef0e5707b428493"} Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.632551 4998 scope.go:117] "RemoveContainer" containerID="4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.658306 4998 scope.go:117] "RemoveContainer" containerID="cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.677967 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f7zpq"] Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.683918 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f7zpq"] Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.696988 4998 scope.go:117] "RemoveContainer" containerID="2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.741838 4998 scope.go:117] "RemoveContainer" containerID="4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d" Feb 27 11:07:38 crc kubenswrapper[4998]: E0227 11:07:38.742260 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d\": container with ID starting with 4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d not found: ID does not exist" containerID="4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.742393 4998 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d"} err="failed to get container status \"4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d\": rpc error: code = NotFound desc = could not find container \"4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d\": container with ID starting with 4e1a5b3f5f79f8d3ea386d615af72a46e1be8251f9e97eade0bc0d40e22d6c5d not found: ID does not exist" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.742477 4998 scope.go:117] "RemoveContainer" containerID="cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3" Feb 27 11:07:38 crc kubenswrapper[4998]: E0227 11:07:38.742835 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3\": container with ID starting with cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3 not found: ID does not exist" containerID="cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.742865 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3"} err="failed to get container status \"cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3\": rpc error: code = NotFound desc = could not find container \"cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3\": container with ID starting with cfa1e5f08b0e9571a482c301483364c5a18cc82975ebc6584ce21559ed761ff3 not found: ID does not exist" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.742881 4998 scope.go:117] "RemoveContainer" containerID="2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d" Feb 27 11:07:38 crc kubenswrapper[4998]: E0227 
11:07:38.743150 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d\": container with ID starting with 2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d not found: ID does not exist" containerID="2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.743179 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d"} err="failed to get container status \"2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d\": rpc error: code = NotFound desc = could not find container \"2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d\": container with ID starting with 2eb618687bb4aaff7aeca527e829f16d1dbc94ff5b98cae797ae4e7789f4295d not found: ID does not exist" Feb 27 11:07:38 crc kubenswrapper[4998]: I0227 11:07:38.779044 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" path="/var/lib/kubelet/pods/3fc8748e-db1e-4aa5-80b4-53e1f324573e/volumes" Feb 27 11:07:45 crc kubenswrapper[4998]: I0227 11:07:45.764951 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:07:45 crc kubenswrapper[4998]: E0227 11:07:45.765736 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:07:46 crc kubenswrapper[4998]: I0227 11:07:46.296143 
4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz_76aa4e91-bce4-4a29-ba7d-ced8e3a54002/util/0.log" Feb 27 11:07:46 crc kubenswrapper[4998]: I0227 11:07:46.508242 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz_76aa4e91-bce4-4a29-ba7d-ced8e3a54002/util/0.log" Feb 27 11:07:46 crc kubenswrapper[4998]: I0227 11:07:46.518987 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz_76aa4e91-bce4-4a29-ba7d-ced8e3a54002/pull/0.log" Feb 27 11:07:46 crc kubenswrapper[4998]: I0227 11:07:46.539307 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz_76aa4e91-bce4-4a29-ba7d-ced8e3a54002/pull/0.log" Feb 27 11:07:46 crc kubenswrapper[4998]: I0227 11:07:46.674841 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz_76aa4e91-bce4-4a29-ba7d-ced8e3a54002/pull/0.log" Feb 27 11:07:46 crc kubenswrapper[4998]: I0227 11:07:46.709322 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz_76aa4e91-bce4-4a29-ba7d-ced8e3a54002/util/0.log" Feb 27 11:07:46 crc kubenswrapper[4998]: I0227 11:07:46.740287 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_96b24b5a2dcfb06a74b6947e2ae317d2fae2eff88377a91491a3fd03d8tvgpz_76aa4e91-bce4-4a29-ba7d-ced8e3a54002/extract/0.log" Feb 27 11:07:47 crc kubenswrapper[4998]: I0227 11:07:47.165530 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-nnlcl_36f927dc-fac8-4bb6-85d1-df539857edf1/manager/0.log" Feb 27 11:07:47 crc kubenswrapper[4998]: I0227 11:07:47.490341 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-249j5_36f2300a-f1d8-429d-8b4f-065c53c8b68b/manager/0.log" Feb 27 11:07:47 crc kubenswrapper[4998]: I0227 11:07:47.615566 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-jcgpp_a27dd930-0b57-431c-ae5c-7c9af1e11dfa/manager/0.log" Feb 27 11:07:47 crc kubenswrapper[4998]: I0227 11:07:47.852571 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-wxtjl_35236f87-0600-46d7-ba5b-7576e20b9dc4/manager/0.log" Feb 27 11:07:48 crc kubenswrapper[4998]: I0227 11:07:48.274430 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-gg44v_f7519997-ef58-4091-bce9-a43762551d56/manager/0.log" Feb 27 11:07:48 crc kubenswrapper[4998]: I0227 11:07:48.382741 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-wmdmq_83211ec0-66ef-476c-ad20-e17e88348f29/manager/0.log" Feb 27 11:07:48 crc kubenswrapper[4998]: I0227 11:07:48.426983 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-kjznw_ad7e7a26-5a61-408d-86ae-0b25b8617147/manager/0.log" Feb 27 11:07:48 crc kubenswrapper[4998]: I0227 11:07:48.650100 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-c9hqt_21596c8d-5360-482b-8cea-eda167f2f1cd/manager/0.log" Feb 27 11:07:48 crc kubenswrapper[4998]: I0227 11:07:48.725561 4998 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-qqrgh_a77eccb8-8369-473a-93ad-d9d67ccea057/manager/0.log"
Feb 27 11:07:48 crc kubenswrapper[4998]: I0227 11:07:48.931148 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-zr6mq_9ec9f13a-82dd-4ab4-8f5a-c15f2a42f2dc/manager/0.log"
Feb 27 11:07:49 crc kubenswrapper[4998]: I0227 11:07:49.267260 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-hv45m_61e9c6f0-0c9c-47ca-b013-b94ba962ec66/manager/0.log"
Feb 27 11:07:49 crc kubenswrapper[4998]: I0227 11:07:49.278535 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-fxkzb_4a837f08-5b8a-4cd8-8943-dc252cfb3f0f/manager/0.log"
Feb 27 11:07:49 crc kubenswrapper[4998]: I0227 11:07:49.498152 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-fs8hb_d31df8ec-7c1c-42b3-b538-2949c015b6e6/manager/0.log"
Feb 27 11:07:49 crc kubenswrapper[4998]: I0227 11:07:49.731961 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cd4n94_6bbe7a4a-b92f-4ba7-ab63-1261ba6a8248/manager/0.log"
Feb 27 11:07:50 crc kubenswrapper[4998]: I0227 11:07:50.038894 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-67c7cb969b-hkjl4_9edacacf-594d-495b-bebf-baea9d2d9ab7/operator/0.log"
Feb 27 11:07:50 crc kubenswrapper[4998]: I0227 11:07:50.601287 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cmv4s_0220a262-0cbc-4328-bc03-63d749d85892/registry-server/0.log"
Feb 27 11:07:50 crc kubenswrapper[4998]: I0227 11:07:50.668186 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-dh4xx_aaf3093e-214b-48a9-8310-56ba32b094f7/manager/0.log"
Feb 27 11:07:50 crc kubenswrapper[4998]: I0227 11:07:50.877817 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-l6kh7_3855bff7-c203-4258-98a0-5afa77cf9b5c/manager/0.log"
Feb 27 11:07:51 crc kubenswrapper[4998]: I0227 11:07:51.076582 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9rqts_d660dd5a-dbce-413b-9075-9de9a2776d8c/operator/0.log"
Feb 27 11:07:51 crc kubenswrapper[4998]: I0227 11:07:51.241261 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-mhc4p_b712d7f9-e5c9-4a97-9757-e8689527c542/manager/0.log"
Feb 27 11:07:51 crc kubenswrapper[4998]: I0227 11:07:51.405800 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-vhsp7_3b9761b2-ee38-418d-800f-ba54fc8960b6/manager/0.log"
Feb 27 11:07:51 crc kubenswrapper[4998]: I0227 11:07:51.474355 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-bq92j_4b9b7b96-1053-4c39-8079-b0c47d540545/manager/0.log"
Feb 27 11:07:51 crc kubenswrapper[4998]: I0227 11:07:51.664022 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-g9dst_f5fd6986-20ba-4982-a978-c76652150ac8/manager/0.log"
Feb 27 11:07:51 crc kubenswrapper[4998]: I0227 11:07:51.959279 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8f6f897df-qkvxk_86a542d0-8588-425c-9d8b-417b0b287ce2/manager/0.log"
Feb 27 11:07:52 crc kubenswrapper[4998]: I0227 11:07:52.937137 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-km79k_cd2a5873-e5a8-4cc6-af9e-d90dd1253bdf/manager/0.log"
Feb 27 11:07:57 crc kubenswrapper[4998]: I0227 11:07:57.765274 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:07:57 crc kubenswrapper[4998]: E0227 11:07:57.766015 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.143962 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536508-qd4nl"]
Feb 27 11:08:00 crc kubenswrapper[4998]: E0227 11:08:00.144768 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerName="extract-utilities"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.144786 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerName="extract-utilities"
Feb 27 11:08:00 crc kubenswrapper[4998]: E0227 11:08:00.144800 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerName="extract-content"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.144807 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerName="extract-content"
Feb 27 11:08:00 crc kubenswrapper[4998]: E0227 11:08:00.144823 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41898aba-d293-4c14-84f9-50a9376209b0" containerName="extract-content"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.144829 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="41898aba-d293-4c14-84f9-50a9376209b0" containerName="extract-content"
Feb 27 11:08:00 crc kubenswrapper[4998]: E0227 11:08:00.144846 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerName="registry-server"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.144852 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerName="registry-server"
Feb 27 11:08:00 crc kubenswrapper[4998]: E0227 11:08:00.144871 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41898aba-d293-4c14-84f9-50a9376209b0" containerName="registry-server"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.144876 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="41898aba-d293-4c14-84f9-50a9376209b0" containerName="registry-server"
Feb 27 11:08:00 crc kubenswrapper[4998]: E0227 11:08:00.144887 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41898aba-d293-4c14-84f9-50a9376209b0" containerName="extract-utilities"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.144893 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="41898aba-d293-4c14-84f9-50a9376209b0" containerName="extract-utilities"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.145054 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc8748e-db1e-4aa5-80b4-53e1f324573e" containerName="registry-server"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.145078 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="41898aba-d293-4c14-84f9-50a9376209b0" containerName="registry-server"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.147037 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536508-qd4nl"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.149008 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.149547 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.150975 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.160517 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536508-qd4nl"]
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.273187 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52st5\" (UniqueName: \"kubernetes.io/projected/d2a0757f-dda2-4efc-accf-b9efd88032bb-kube-api-access-52st5\") pod \"auto-csr-approver-29536508-qd4nl\" (UID: \"d2a0757f-dda2-4efc-accf-b9efd88032bb\") " pod="openshift-infra/auto-csr-approver-29536508-qd4nl"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.375509 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52st5\" (UniqueName: \"kubernetes.io/projected/d2a0757f-dda2-4efc-accf-b9efd88032bb-kube-api-access-52st5\") pod \"auto-csr-approver-29536508-qd4nl\" (UID: \"d2a0757f-dda2-4efc-accf-b9efd88032bb\") " pod="openshift-infra/auto-csr-approver-29536508-qd4nl"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.404011 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52st5\" (UniqueName: \"kubernetes.io/projected/d2a0757f-dda2-4efc-accf-b9efd88032bb-kube-api-access-52st5\") pod \"auto-csr-approver-29536508-qd4nl\" (UID: \"d2a0757f-dda2-4efc-accf-b9efd88032bb\") " pod="openshift-infra/auto-csr-approver-29536508-qd4nl"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.467908 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536508-qd4nl"
Feb 27 11:08:00 crc kubenswrapper[4998]: I0227 11:08:00.913763 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536508-qd4nl"]
Feb 27 11:08:01 crc kubenswrapper[4998]: I0227 11:08:01.826253 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536508-qd4nl" event={"ID":"d2a0757f-dda2-4efc-accf-b9efd88032bb","Type":"ContainerStarted","Data":"2f2cf4a183889979ba30bfba0c794d875605990cb45c911abcd174b69625f77c"}
Feb 27 11:08:02 crc kubenswrapper[4998]: I0227 11:08:02.838112 4998 generic.go:334] "Generic (PLEG): container finished" podID="d2a0757f-dda2-4efc-accf-b9efd88032bb" containerID="9487ef2cde9eeb9d99a6308100c7da9ef7d1b52b900b90c41f6b6359db543d79" exitCode=0
Feb 27 11:08:02 crc kubenswrapper[4998]: I0227 11:08:02.838381 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536508-qd4nl" event={"ID":"d2a0757f-dda2-4efc-accf-b9efd88032bb","Type":"ContainerDied","Data":"9487ef2cde9eeb9d99a6308100c7da9ef7d1b52b900b90c41f6b6359db543d79"}
Feb 27 11:08:04 crc kubenswrapper[4998]: I0227 11:08:04.233599 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536508-qd4nl"
Feb 27 11:08:04 crc kubenswrapper[4998]: I0227 11:08:04.345518 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52st5\" (UniqueName: \"kubernetes.io/projected/d2a0757f-dda2-4efc-accf-b9efd88032bb-kube-api-access-52st5\") pod \"d2a0757f-dda2-4efc-accf-b9efd88032bb\" (UID: \"d2a0757f-dda2-4efc-accf-b9efd88032bb\") "
Feb 27 11:08:04 crc kubenswrapper[4998]: I0227 11:08:04.350899 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a0757f-dda2-4efc-accf-b9efd88032bb-kube-api-access-52st5" (OuterVolumeSpecName: "kube-api-access-52st5") pod "d2a0757f-dda2-4efc-accf-b9efd88032bb" (UID: "d2a0757f-dda2-4efc-accf-b9efd88032bb"). InnerVolumeSpecName "kube-api-access-52st5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:08:04 crc kubenswrapper[4998]: I0227 11:08:04.447715 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52st5\" (UniqueName: \"kubernetes.io/projected/d2a0757f-dda2-4efc-accf-b9efd88032bb-kube-api-access-52st5\") on node \"crc\" DevicePath \"\""
Feb 27 11:08:04 crc kubenswrapper[4998]: I0227 11:08:04.857622 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536508-qd4nl" event={"ID":"d2a0757f-dda2-4efc-accf-b9efd88032bb","Type":"ContainerDied","Data":"2f2cf4a183889979ba30bfba0c794d875605990cb45c911abcd174b69625f77c"}
Feb 27 11:08:04 crc kubenswrapper[4998]: I0227 11:08:04.857907 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2cf4a183889979ba30bfba0c794d875605990cb45c911abcd174b69625f77c"
Feb 27 11:08:04 crc kubenswrapper[4998]: I0227 11:08:04.857870 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536508-qd4nl"
Feb 27 11:08:05 crc kubenswrapper[4998]: I0227 11:08:05.321103 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536502-glvp4"]
Feb 27 11:08:05 crc kubenswrapper[4998]: I0227 11:08:05.330843 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536502-glvp4"]
Feb 27 11:08:06 crc kubenswrapper[4998]: I0227 11:08:06.778777 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee4571a-5b43-45e9-9369-1694d8cbf950" path="/var/lib/kubelet/pods/cee4571a-5b43-45e9-9369-1694d8cbf950/volumes"
Feb 27 11:08:08 crc kubenswrapper[4998]: I0227 11:08:08.764922 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:08:08 crc kubenswrapper[4998]: E0227 11:08:08.765738 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:08:12 crc kubenswrapper[4998]: I0227 11:08:12.787746 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-b25mg_827ec3fa-aae4-4a9d-b3d6-3b93e0a9b71d/control-plane-machine-set-operator/0.log"
Feb 27 11:08:13 crc kubenswrapper[4998]: I0227 11:08:13.019368 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dvl4h_ae341042-4a8d-4a41-bb0e-931abecc819a/kube-rbac-proxy/0.log"
Feb 27 11:08:13 crc kubenswrapper[4998]: I0227 11:08:13.068933 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dvl4h_ae341042-4a8d-4a41-bb0e-931abecc819a/machine-api-operator/0.log"
Feb 27 11:08:22 crc kubenswrapper[4998]: I0227 11:08:22.764641 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:08:22 crc kubenswrapper[4998]: E0227 11:08:22.765607 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:08:28 crc kubenswrapper[4998]: I0227 11:08:28.317980 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-b2wdg_06d91e45-d849-4950-98ad-57110fb7f9bf/cert-manager-controller/0.log"
Feb 27 11:08:28 crc kubenswrapper[4998]: I0227 11:08:28.434409 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lfw62_2ec7f8ec-f44c-458c-b848-641340387d02/cert-manager-cainjector/0.log"
Feb 27 11:08:28 crc kubenswrapper[4998]: I0227 11:08:28.550430 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-v8j65_a05eaee3-fb55-4085-95d2-a78ded3a3799/cert-manager-webhook/0.log"
Feb 27 11:08:34 crc kubenswrapper[4998]: I0227 11:08:34.767380 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:08:34 crc kubenswrapper[4998]: E0227 11:08:34.768672 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:08:42 crc kubenswrapper[4998]: I0227 11:08:42.351790 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-zr9tf_f3c68fd4-04c2-4b0d-8b6f-2b45639a240c/nmstate-console-plugin/0.log"
Feb 27 11:08:42 crc kubenswrapper[4998]: I0227 11:08:42.559181 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sld54_23f0f1b3-af91-4dd1-8762-9e5ddb1c142e/nmstate-handler/0.log"
Feb 27 11:08:42 crc kubenswrapper[4998]: I0227 11:08:42.619713 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-kjzpr_b49731c6-730e-4a09-b5e7-21a5890cc8d7/kube-rbac-proxy/0.log"
Feb 27 11:08:42 crc kubenswrapper[4998]: I0227 11:08:42.652902 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-kjzpr_b49731c6-730e-4a09-b5e7-21a5890cc8d7/nmstate-metrics/0.log"
Feb 27 11:08:42 crc kubenswrapper[4998]: I0227 11:08:42.768980 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-4rvgw_df1a20ec-d325-4ed5-b485-8a1460ea8cf3/nmstate-operator/0.log"
Feb 27 11:08:42 crc kubenswrapper[4998]: I0227 11:08:42.869575 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-bksxj_818f1bd6-a5d4-431f-87ca-bf94e3c029de/nmstate-webhook/0.log"
Feb 27 11:08:45 crc kubenswrapper[4998]: I0227 11:08:45.765084 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:08:45 crc kubenswrapper[4998]: E0227 11:08:45.765689 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:08:53 crc kubenswrapper[4998]: I0227 11:08:53.975471 4998 scope.go:117] "RemoveContainer" containerID="a40a2e4da94976d0c88200579958e02ee72a624ca7ccf0951dafa373dcfe4e10"
Feb 27 11:08:57 crc kubenswrapper[4998]: I0227 11:08:57.765307 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:08:57 crc kubenswrapper[4998]: E0227 11:08:57.766137 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:09:11 crc kubenswrapper[4998]: I0227 11:09:11.765012 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:09:11 crc kubenswrapper[4998]: E0227 11:09:11.765631 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:09:11 crc kubenswrapper[4998]: I0227 11:09:11.817296 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-plw4b_665ae341-c75e-42ec-b806-6e58a49d6b0a/kube-rbac-proxy/0.log"
Feb 27 11:09:11 crc kubenswrapper[4998]: I0227 11:09:11.985178 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-plw4b_665ae341-c75e-42ec-b806-6e58a49d6b0a/controller/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.021828 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-frr-files/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.221724 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-metrics/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.228663 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-reloader/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.230743 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-reloader/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.253951 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-frr-files/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.411643 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-frr-files/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.423346 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-reloader/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.427935 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-metrics/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.450526 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-metrics/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.582861 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-frr-files/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.616035 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-reloader/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.622253 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/cp-metrics/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.627368 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/controller/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.783440 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/kube-rbac-proxy-frr/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.794081 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/frr-metrics/0.log"
Feb 27 11:09:12 crc kubenswrapper[4998]: I0227 11:09:12.803936 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/kube-rbac-proxy/0.log"
Feb 27 11:09:13 crc kubenswrapper[4998]: I0227 11:09:13.103026 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-cm24v_6156d7c5-b8a3-4741-b1b5-cc12edf7cba1/frr-k8s-webhook-server/0.log"
Feb 27 11:09:13 crc kubenswrapper[4998]: I0227 11:09:13.113008 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/reloader/0.log"
Feb 27 11:09:13 crc kubenswrapper[4998]: I0227 11:09:13.314173 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68d695695-666w7_af258c6d-b8fa-403e-95d8-b2d6d3d3c3a4/manager/0.log"
Feb 27 11:09:13 crc kubenswrapper[4998]: I0227 11:09:13.473396 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7cc8894d64-jsj29_a161099d-bd66-4a7e-9b8d-c6b8881b2512/webhook-server/0.log"
Feb 27 11:09:13 crc kubenswrapper[4998]: I0227 11:09:13.545841 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9rnsc_6251025c-f95e-45c1-a13d-25806c9afc6d/kube-rbac-proxy/0.log"
Feb 27 11:09:14 crc kubenswrapper[4998]: I0227 11:09:14.064870 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9rnsc_6251025c-f95e-45c1-a13d-25806c9afc6d/speaker/0.log"
Feb 27 11:09:14 crc kubenswrapper[4998]: I0227 11:09:14.155363 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5q9x_9f850639-e424-4e27-b201-24520cb55133/frr/0.log"
Feb 27 11:09:23 crc kubenswrapper[4998]: I0227 11:09:23.765651 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:09:23 crc kubenswrapper[4998]: E0227 11:09:23.766718 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:09:28 crc kubenswrapper[4998]: I0227 11:09:28.349619 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95_798850d8-3ba1-4af9-a3d5-4df55bf658a4/util/0.log"
Feb 27 11:09:28 crc kubenswrapper[4998]: I0227 11:09:28.625505 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95_798850d8-3ba1-4af9-a3d5-4df55bf658a4/util/0.log"
Feb 27 11:09:28 crc kubenswrapper[4998]: I0227 11:09:28.631439 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95_798850d8-3ba1-4af9-a3d5-4df55bf658a4/pull/0.log"
Feb 27 11:09:28 crc kubenswrapper[4998]: I0227 11:09:28.668738 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95_798850d8-3ba1-4af9-a3d5-4df55bf658a4/pull/0.log"
Feb 27 11:09:28 crc kubenswrapper[4998]: I0227 11:09:28.830352 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95_798850d8-3ba1-4af9-a3d5-4df55bf658a4/extract/0.log"
Feb 27 11:09:28 crc kubenswrapper[4998]: I0227 11:09:28.870289 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95_798850d8-3ba1-4af9-a3d5-4df55bf658a4/pull/0.log"
Feb 27 11:09:28 crc kubenswrapper[4998]: I0227 11:09:28.879135 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a829cc95_798850d8-3ba1-4af9-a3d5-4df55bf658a4/util/0.log"
Feb 27 11:09:28 crc kubenswrapper[4998]: I0227 11:09:28.984332 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x7mg4_d9b958b3-f9cf-4386-9734-3d52c0c3ba65/extract-utilities/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.208486 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x7mg4_d9b958b3-f9cf-4386-9734-3d52c0c3ba65/extract-content/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.214212 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x7mg4_d9b958b3-f9cf-4386-9734-3d52c0c3ba65/extract-utilities/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.243632 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x7mg4_d9b958b3-f9cf-4386-9734-3d52c0c3ba65/extract-content/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.417645 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x7mg4_d9b958b3-f9cf-4386-9734-3d52c0c3ba65/extract-utilities/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.449183 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x7mg4_d9b958b3-f9cf-4386-9734-3d52c0c3ba65/extract-content/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.606729 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sf2xm_f9f52852-ea09-4a76-a196-c48346479c71/extract-utilities/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.770056 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x7mg4_d9b958b3-f9cf-4386-9734-3d52c0c3ba65/registry-server/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.804891 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sf2xm_f9f52852-ea09-4a76-a196-c48346479c71/extract-utilities/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.831564 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sf2xm_f9f52852-ea09-4a76-a196-c48346479c71/extract-content/0.log"
Feb 27 11:09:29 crc kubenswrapper[4998]: I0227 11:09:29.856356 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sf2xm_f9f52852-ea09-4a76-a196-c48346479c71/extract-content/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.011064 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sf2xm_f9f52852-ea09-4a76-a196-c48346479c71/extract-utilities/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.028433 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sf2xm_f9f52852-ea09-4a76-a196-c48346479c71/extract-content/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.357806 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2_6d48cabe-f00e-4d86-895b-6ed01fbb3ef4/util/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.432738 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sf2xm_f9f52852-ea09-4a76-a196-c48346479c71/registry-server/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.550074 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2_6d48cabe-f00e-4d86-895b-6ed01fbb3ef4/util/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.559201 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2_6d48cabe-f00e-4d86-895b-6ed01fbb3ef4/pull/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.565512 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2_6d48cabe-f00e-4d86-895b-6ed01fbb3ef4/pull/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.702713 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2_6d48cabe-f00e-4d86-895b-6ed01fbb3ef4/util/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.768975 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2_6d48cabe-f00e-4d86-895b-6ed01fbb3ef4/pull/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.771794 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kmlf2_6d48cabe-f00e-4d86-895b-6ed01fbb3ef4/extract/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.901579 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lpczb_72f2d961-29af-48b5-b073-9c1de03ed288/marketplace-operator/0.log"
Feb 27 11:09:30 crc kubenswrapper[4998]: I0227 11:09:30.957929 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2rbwc_efd74380-0bb3-4d33-a593-35a6f3388e3d/extract-utilities/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.117041 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2rbwc_efd74380-0bb3-4d33-a593-35a6f3388e3d/extract-utilities/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.135962 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2rbwc_efd74380-0bb3-4d33-a593-35a6f3388e3d/extract-content/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.164156 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2rbwc_efd74380-0bb3-4d33-a593-35a6f3388e3d/extract-content/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.352566 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2rbwc_efd74380-0bb3-4d33-a593-35a6f3388e3d/extract-utilities/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.356053 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2rbwc_efd74380-0bb3-4d33-a593-35a6f3388e3d/extract-content/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.455883 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2rbwc_efd74380-0bb3-4d33-a593-35a6f3388e3d/registry-server/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.532319 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vkrls_fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380/extract-utilities/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.691581 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vkrls_fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380/extract-utilities/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.715182 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vkrls_fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380/extract-content/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.715928 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vkrls_fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380/extract-content/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.862640 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vkrls_fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380/extract-utilities/0.log"
Feb 27 11:09:31 crc kubenswrapper[4998]: I0227 11:09:31.869311 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vkrls_fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380/extract-content/0.log"
Feb 27 11:09:32 crc kubenswrapper[4998]: I0227 11:09:32.376822 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vkrls_fbd2e6ba-efd1-454e-81b3-4f1e1d5e6380/registry-server/0.log"
Feb 27 11:09:36 crc kubenswrapper[4998]: I0227 11:09:36.765994 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:09:36 crc kubenswrapper[4998]: E0227 11:09:36.766894 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:09:50 crc kubenswrapper[4998]: I0227 11:09:50.765725 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:09:50 crc kubenswrapper[4998]: E0227 11:09:50.766343 4998 pod_workers.go:1301]
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.151191 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536510-vndwl"] Feb 27 11:10:00 crc kubenswrapper[4998]: E0227 11:10:00.152090 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a0757f-dda2-4efc-accf-b9efd88032bb" containerName="oc" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.152103 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a0757f-dda2-4efc-accf-b9efd88032bb" containerName="oc" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.152401 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a0757f-dda2-4efc-accf-b9efd88032bb" containerName="oc" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.153187 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536510-vndwl" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.157079 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.157133 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.157750 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.166265 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536510-vndwl"] Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.245756 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnhqt\" (UniqueName: \"kubernetes.io/projected/695eb442-fe1c-4e2c-b1a4-161fa4bbc0db-kube-api-access-fnhqt\") pod \"auto-csr-approver-29536510-vndwl\" (UID: \"695eb442-fe1c-4e2c-b1a4-161fa4bbc0db\") " pod="openshift-infra/auto-csr-approver-29536510-vndwl" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.347638 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnhqt\" (UniqueName: \"kubernetes.io/projected/695eb442-fe1c-4e2c-b1a4-161fa4bbc0db-kube-api-access-fnhqt\") pod \"auto-csr-approver-29536510-vndwl\" (UID: \"695eb442-fe1c-4e2c-b1a4-161fa4bbc0db\") " pod="openshift-infra/auto-csr-approver-29536510-vndwl" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.376244 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnhqt\" (UniqueName: \"kubernetes.io/projected/695eb442-fe1c-4e2c-b1a4-161fa4bbc0db-kube-api-access-fnhqt\") pod \"auto-csr-approver-29536510-vndwl\" (UID: \"695eb442-fe1c-4e2c-b1a4-161fa4bbc0db\") " 
pod="openshift-infra/auto-csr-approver-29536510-vndwl" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.475378 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536510-vndwl" Feb 27 11:10:00 crc kubenswrapper[4998]: I0227 11:10:00.950579 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536510-vndwl"] Feb 27 11:10:01 crc kubenswrapper[4998]: I0227 11:10:01.094949 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536510-vndwl" event={"ID":"695eb442-fe1c-4e2c-b1a4-161fa4bbc0db","Type":"ContainerStarted","Data":"5b64be60b4c0dd1cefcbc2a423cef9accb548bf44a8c32b32b06c2fa716dfcbd"} Feb 27 11:10:03 crc kubenswrapper[4998]: I0227 11:10:03.123653 4998 generic.go:334] "Generic (PLEG): container finished" podID="695eb442-fe1c-4e2c-b1a4-161fa4bbc0db" containerID="428ee579709d3f5a13740ecd39b70f04a7580af6ca259e352c95400323391c0a" exitCode=0 Feb 27 11:10:03 crc kubenswrapper[4998]: I0227 11:10:03.124153 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536510-vndwl" event={"ID":"695eb442-fe1c-4e2c-b1a4-161fa4bbc0db","Type":"ContainerDied","Data":"428ee579709d3f5a13740ecd39b70f04a7580af6ca259e352c95400323391c0a"} Feb 27 11:10:04 crc kubenswrapper[4998]: I0227 11:10:04.491206 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536510-vndwl" Feb 27 11:10:04 crc kubenswrapper[4998]: I0227 11:10:04.528401 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnhqt\" (UniqueName: \"kubernetes.io/projected/695eb442-fe1c-4e2c-b1a4-161fa4bbc0db-kube-api-access-fnhqt\") pod \"695eb442-fe1c-4e2c-b1a4-161fa4bbc0db\" (UID: \"695eb442-fe1c-4e2c-b1a4-161fa4bbc0db\") " Feb 27 11:10:04 crc kubenswrapper[4998]: I0227 11:10:04.535079 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695eb442-fe1c-4e2c-b1a4-161fa4bbc0db-kube-api-access-fnhqt" (OuterVolumeSpecName: "kube-api-access-fnhqt") pod "695eb442-fe1c-4e2c-b1a4-161fa4bbc0db" (UID: "695eb442-fe1c-4e2c-b1a4-161fa4bbc0db"). InnerVolumeSpecName "kube-api-access-fnhqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:10:04 crc kubenswrapper[4998]: I0227 11:10:04.631903 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnhqt\" (UniqueName: \"kubernetes.io/projected/695eb442-fe1c-4e2c-b1a4-161fa4bbc0db-kube-api-access-fnhqt\") on node \"crc\" DevicePath \"\"" Feb 27 11:10:05 crc kubenswrapper[4998]: I0227 11:10:05.143399 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536510-vndwl" event={"ID":"695eb442-fe1c-4e2c-b1a4-161fa4bbc0db","Type":"ContainerDied","Data":"5b64be60b4c0dd1cefcbc2a423cef9accb548bf44a8c32b32b06c2fa716dfcbd"} Feb 27 11:10:05 crc kubenswrapper[4998]: I0227 11:10:05.143767 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b64be60b4c0dd1cefcbc2a423cef9accb548bf44a8c32b32b06c2fa716dfcbd" Feb 27 11:10:05 crc kubenswrapper[4998]: I0227 11:10:05.143462 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536510-vndwl" Feb 27 11:10:05 crc kubenswrapper[4998]: I0227 11:10:05.575019 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536504-stkb7"] Feb 27 11:10:05 crc kubenswrapper[4998]: I0227 11:10:05.596523 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536504-stkb7"] Feb 27 11:10:05 crc kubenswrapper[4998]: I0227 11:10:05.766474 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:10:05 crc kubenswrapper[4998]: E0227 11:10:05.766712 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:10:06 crc kubenswrapper[4998]: I0227 11:10:06.774509 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5d454a-cf0e-418a-8026-4a7c4a47d253" path="/var/lib/kubelet/pods/5f5d454a-cf0e-418a-8026-4a7c4a47d253/volumes" Feb 27 11:10:20 crc kubenswrapper[4998]: I0227 11:10:20.768668 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:10:20 crc kubenswrapper[4998]: E0227 11:10:20.769492 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" 
podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:10:31 crc kubenswrapper[4998]: I0227 11:10:31.765003 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:10:31 crc kubenswrapper[4998]: E0227 11:10:31.766214 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:10:43 crc kubenswrapper[4998]: I0227 11:10:43.765556 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:10:43 crc kubenswrapper[4998]: E0227 11:10:43.766860 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:10:54 crc kubenswrapper[4998]: I0227 11:10:54.072834 4998 scope.go:117] "RemoveContainer" containerID="ecb2662395685abb4009b04ad95af47df4babd9af901db78d07730df88667b5e" Feb 27 11:10:54 crc kubenswrapper[4998]: I0227 11:10:54.765213 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:10:54 crc kubenswrapper[4998]: E0227 11:10:54.765976 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:11:09 crc kubenswrapper[4998]: I0227 11:11:09.765896 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:11:09 crc kubenswrapper[4998]: E0227 11:11:09.767221 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:11:11 crc kubenswrapper[4998]: I0227 11:11:11.930341 4998 generic.go:334] "Generic (PLEG): container finished" podID="52d03bda-caf6-462f-96d8-37dd3c6ed001" containerID="25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c" exitCode=0 Feb 27 11:11:11 crc kubenswrapper[4998]: I0227 11:11:11.930466 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6ndsf/must-gather-b5b4f" event={"ID":"52d03bda-caf6-462f-96d8-37dd3c6ed001","Type":"ContainerDied","Data":"25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c"} Feb 27 11:11:11 crc kubenswrapper[4998]: I0227 11:11:11.931537 4998 scope.go:117] "RemoveContainer" containerID="25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c" Feb 27 11:11:12 crc kubenswrapper[4998]: I0227 11:11:12.294563 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6ndsf_must-gather-b5b4f_52d03bda-caf6-462f-96d8-37dd3c6ed001/gather/0.log" Feb 27 11:11:20 crc kubenswrapper[4998]: I0227 11:11:20.392062 4998 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-must-gather-6ndsf/must-gather-b5b4f"] Feb 27 11:11:20 crc kubenswrapper[4998]: I0227 11:11:20.392628 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6ndsf/must-gather-b5b4f"] Feb 27 11:11:20 crc kubenswrapper[4998]: I0227 11:11:20.392908 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6ndsf/must-gather-b5b4f" podUID="52d03bda-caf6-462f-96d8-37dd3c6ed001" containerName="copy" containerID="cri-o://cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513" gracePeriod=2 Feb 27 11:11:20 crc kubenswrapper[4998]: E0227 11:11:20.432137 4998 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d03bda_caf6_462f_96d8_37dd3c6ed001.slice/crio-cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513.scope\": RecentStats: unable to find data in memory cache]" Feb 27 11:11:20 crc kubenswrapper[4998]: I0227 11:11:20.819032 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6ndsf_must-gather-b5b4f_52d03bda-caf6-462f-96d8-37dd3c6ed001/copy/0.log" Feb 27 11:11:20 crc kubenswrapper[4998]: I0227 11:11:20.819946 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6ndsf/must-gather-b5b4f" Feb 27 11:11:20 crc kubenswrapper[4998]: I0227 11:11:20.943366 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/52d03bda-caf6-462f-96d8-37dd3c6ed001-must-gather-output\") pod \"52d03bda-caf6-462f-96d8-37dd3c6ed001\" (UID: \"52d03bda-caf6-462f-96d8-37dd3c6ed001\") " Feb 27 11:11:20 crc kubenswrapper[4998]: I0227 11:11:20.943533 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl4zk\" (UniqueName: \"kubernetes.io/projected/52d03bda-caf6-462f-96d8-37dd3c6ed001-kube-api-access-vl4zk\") pod \"52d03bda-caf6-462f-96d8-37dd3c6ed001\" (UID: \"52d03bda-caf6-462f-96d8-37dd3c6ed001\") " Feb 27 11:11:20 crc kubenswrapper[4998]: I0227 11:11:20.948118 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d03bda-caf6-462f-96d8-37dd3c6ed001-kube-api-access-vl4zk" (OuterVolumeSpecName: "kube-api-access-vl4zk") pod "52d03bda-caf6-462f-96d8-37dd3c6ed001" (UID: "52d03bda-caf6-462f-96d8-37dd3c6ed001"). InnerVolumeSpecName "kube-api-access-vl4zk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.035200 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6ndsf_must-gather-b5b4f_52d03bda-caf6-462f-96d8-37dd3c6ed001/copy/0.log" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.035613 4998 generic.go:334] "Generic (PLEG): container finished" podID="52d03bda-caf6-462f-96d8-37dd3c6ed001" containerID="cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513" exitCode=143 Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.035670 4998 scope.go:117] "RemoveContainer" containerID="cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.035834 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6ndsf/must-gather-b5b4f" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.045489 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl4zk\" (UniqueName: \"kubernetes.io/projected/52d03bda-caf6-462f-96d8-37dd3c6ed001-kube-api-access-vl4zk\") on node \"crc\" DevicePath \"\"" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.069309 4998 scope.go:117] "RemoveContainer" containerID="25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.115953 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d03bda-caf6-462f-96d8-37dd3c6ed001-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "52d03bda-caf6-462f-96d8-37dd3c6ed001" (UID: "52d03bda-caf6-462f-96d8-37dd3c6ed001"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.142888 4998 scope.go:117] "RemoveContainer" containerID="cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513" Feb 27 11:11:21 crc kubenswrapper[4998]: E0227 11:11:21.143334 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513\": container with ID starting with cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513 not found: ID does not exist" containerID="cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.143454 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513"} err="failed to get container status \"cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513\": rpc error: code = NotFound desc = could not find container \"cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513\": container with ID starting with cacb879143f9093a92c1055fbbdb8060a1caa5a52c72f3429e18d4cf2490d513 not found: ID does not exist" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.143548 4998 scope.go:117] "RemoveContainer" containerID="25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c" Feb 27 11:11:21 crc kubenswrapper[4998]: E0227 11:11:21.143915 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c\": container with ID starting with 25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c not found: ID does not exist" containerID="25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.144026 
4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c"} err="failed to get container status \"25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c\": rpc error: code = NotFound desc = could not find container \"25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c\": container with ID starting with 25f463779b5babc9ef17007ce6902c0c44f85cfa1b27688d6c443c2a9700d78c not found: ID does not exist" Feb 27 11:11:21 crc kubenswrapper[4998]: I0227 11:11:21.147393 4998 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/52d03bda-caf6-462f-96d8-37dd3c6ed001-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 27 11:11:22 crc kubenswrapper[4998]: I0227 11:11:22.775556 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d03bda-caf6-462f-96d8-37dd3c6ed001" path="/var/lib/kubelet/pods/52d03bda-caf6-462f-96d8-37dd3c6ed001/volumes" Feb 27 11:11:24 crc kubenswrapper[4998]: I0227 11:11:24.765042 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:11:24 crc kubenswrapper[4998]: E0227 11:11:24.765854 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:11:36 crc kubenswrapper[4998]: I0227 11:11:36.765467 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:11:36 crc kubenswrapper[4998]: E0227 11:11:36.766191 4998 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:11:51 crc kubenswrapper[4998]: I0227 11:11:51.766020 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31" Feb 27 11:11:51 crc kubenswrapper[4998]: E0227 11:11:51.767186 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.190587 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536512-j97cf"] Feb 27 11:12:00 crc kubenswrapper[4998]: E0227 11:12:00.191820 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d03bda-caf6-462f-96d8-37dd3c6ed001" containerName="gather" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.191844 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d03bda-caf6-462f-96d8-37dd3c6ed001" containerName="gather" Feb 27 11:12:00 crc kubenswrapper[4998]: E0227 11:12:00.191893 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d03bda-caf6-462f-96d8-37dd3c6ed001" containerName="copy" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.191905 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d03bda-caf6-462f-96d8-37dd3c6ed001" 
containerName="copy" Feb 27 11:12:00 crc kubenswrapper[4998]: E0227 11:12:00.191947 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695eb442-fe1c-4e2c-b1a4-161fa4bbc0db" containerName="oc" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.191959 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="695eb442-fe1c-4e2c-b1a4-161fa4bbc0db" containerName="oc" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.192382 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d03bda-caf6-462f-96d8-37dd3c6ed001" containerName="copy" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.192415 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="695eb442-fe1c-4e2c-b1a4-161fa4bbc0db" containerName="oc" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.192457 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d03bda-caf6-462f-96d8-37dd3c6ed001" containerName="gather" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.193471 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536512-j97cf" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.195750 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.199250 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.200138 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.203173 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536512-j97cf"] Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.300402 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkk5w\" (UniqueName: \"kubernetes.io/projected/d3064e41-b8aa-414b-9f73-8e9b1252df95-kube-api-access-qkk5w\") pod \"auto-csr-approver-29536512-j97cf\" (UID: \"d3064e41-b8aa-414b-9f73-8e9b1252df95\") " pod="openshift-infra/auto-csr-approver-29536512-j97cf" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.403593 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkk5w\" (UniqueName: \"kubernetes.io/projected/d3064e41-b8aa-414b-9f73-8e9b1252df95-kube-api-access-qkk5w\") pod \"auto-csr-approver-29536512-j97cf\" (UID: \"d3064e41-b8aa-414b-9f73-8e9b1252df95\") " pod="openshift-infra/auto-csr-approver-29536512-j97cf" Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.442216 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkk5w\" (UniqueName: \"kubernetes.io/projected/d3064e41-b8aa-414b-9f73-8e9b1252df95-kube-api-access-qkk5w\") pod \"auto-csr-approver-29536512-j97cf\" (UID: \"d3064e41-b8aa-414b-9f73-8e9b1252df95\") " 
pod="openshift-infra/auto-csr-approver-29536512-j97cf"
Feb 27 11:12:00 crc kubenswrapper[4998]: I0227 11:12:00.515211 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536512-j97cf"
Feb 27 11:12:01 crc kubenswrapper[4998]: I0227 11:12:01.040050 4998 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 11:12:01 crc kubenswrapper[4998]: I0227 11:12:01.040568 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536512-j97cf"]
Feb 27 11:12:01 crc kubenswrapper[4998]: I0227 11:12:01.432923 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536512-j97cf" event={"ID":"d3064e41-b8aa-414b-9f73-8e9b1252df95","Type":"ContainerStarted","Data":"0b289aef42793ae22f174b11aed99f14520bf253ab6c74a93e20a7068b6e8496"}
Feb 27 11:12:03 crc kubenswrapper[4998]: I0227 11:12:03.457074 4998 generic.go:334] "Generic (PLEG): container finished" podID="d3064e41-b8aa-414b-9f73-8e9b1252df95" containerID="21915897fbf9efc70dbe9ccd8b99a0da900f015bd806a0e68df621a84dadf2c3" exitCode=0
Feb 27 11:12:03 crc kubenswrapper[4998]: I0227 11:12:03.457273 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536512-j97cf" event={"ID":"d3064e41-b8aa-414b-9f73-8e9b1252df95","Type":"ContainerDied","Data":"21915897fbf9efc70dbe9ccd8b99a0da900f015bd806a0e68df621a84dadf2c3"}
Feb 27 11:12:03 crc kubenswrapper[4998]: I0227 11:12:03.765926 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:12:03 crc kubenswrapper[4998]: E0227 11:12:03.766207 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m6kr5_openshift-machine-config-operator(400c5e2f-5448-49c6-bf8e-04b21e552bb2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2"
Feb 27 11:12:04 crc kubenswrapper[4998]: I0227 11:12:04.880328 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536512-j97cf"
Feb 27 11:12:04 crc kubenswrapper[4998]: I0227 11:12:04.904799 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkk5w\" (UniqueName: \"kubernetes.io/projected/d3064e41-b8aa-414b-9f73-8e9b1252df95-kube-api-access-qkk5w\") pod \"d3064e41-b8aa-414b-9f73-8e9b1252df95\" (UID: \"d3064e41-b8aa-414b-9f73-8e9b1252df95\") "
Feb 27 11:12:04 crc kubenswrapper[4998]: I0227 11:12:04.916420 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3064e41-b8aa-414b-9f73-8e9b1252df95-kube-api-access-qkk5w" (OuterVolumeSpecName: "kube-api-access-qkk5w") pod "d3064e41-b8aa-414b-9f73-8e9b1252df95" (UID: "d3064e41-b8aa-414b-9f73-8e9b1252df95"). InnerVolumeSpecName "kube-api-access-qkk5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:12:05 crc kubenswrapper[4998]: I0227 11:12:05.006709 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkk5w\" (UniqueName: \"kubernetes.io/projected/d3064e41-b8aa-414b-9f73-8e9b1252df95-kube-api-access-qkk5w\") on node \"crc\" DevicePath \"\""
Feb 27 11:12:05 crc kubenswrapper[4998]: I0227 11:12:05.476442 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536512-j97cf" event={"ID":"d3064e41-b8aa-414b-9f73-8e9b1252df95","Type":"ContainerDied","Data":"0b289aef42793ae22f174b11aed99f14520bf253ab6c74a93e20a7068b6e8496"}
Feb 27 11:12:05 crc kubenswrapper[4998]: I0227 11:12:05.476483 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b289aef42793ae22f174b11aed99f14520bf253ab6c74a93e20a7068b6e8496"
Feb 27 11:12:05 crc kubenswrapper[4998]: I0227 11:12:05.476484 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536512-j97cf"
Feb 27 11:12:05 crc kubenswrapper[4998]: I0227 11:12:05.961628 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536506-wb42h"]
Feb 27 11:12:05 crc kubenswrapper[4998]: I0227 11:12:05.974090 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536506-wb42h"]
Feb 27 11:12:06 crc kubenswrapper[4998]: I0227 11:12:06.780269 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e1ac321-7765-4310-9042-626ce024d9e4" path="/var/lib/kubelet/pods/9e1ac321-7765-4310-9042-626ce024d9e4/volumes"
Feb 27 11:12:17 crc kubenswrapper[4998]: I0227 11:12:17.764682 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:12:18 crc kubenswrapper[4998]: I0227 11:12:18.656760 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"8d896093a44467e7ee53003858171c833352be9528110c9ca9c7f0e969ee905c"}
Feb 27 11:12:54 crc kubenswrapper[4998]: I0227 11:12:54.269318 4998 scope.go:117] "RemoveContainer" containerID="d8037209095bf2f748f81ee6ce79d4a6d505c61a0aac2f3a7d7603b35c8402ef"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.162265 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536514-8wd48"]
Feb 27 11:14:00 crc kubenswrapper[4998]: E0227 11:14:00.163562 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3064e41-b8aa-414b-9f73-8e9b1252df95" containerName="oc"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.163587 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3064e41-b8aa-414b-9f73-8e9b1252df95" containerName="oc"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.163928 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3064e41-b8aa-414b-9f73-8e9b1252df95" containerName="oc"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.164901 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536514-8wd48"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.169321 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.169321 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.175422 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536514-8wd48"]
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.177302 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.299641 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7ccl\" (UniqueName: \"kubernetes.io/projected/9930cc64-459c-430f-b78b-d01806c13920-kube-api-access-q7ccl\") pod \"auto-csr-approver-29536514-8wd48\" (UID: \"9930cc64-459c-430f-b78b-d01806c13920\") " pod="openshift-infra/auto-csr-approver-29536514-8wd48"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.401679 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7ccl\" (UniqueName: \"kubernetes.io/projected/9930cc64-459c-430f-b78b-d01806c13920-kube-api-access-q7ccl\") pod \"auto-csr-approver-29536514-8wd48\" (UID: \"9930cc64-459c-430f-b78b-d01806c13920\") " pod="openshift-infra/auto-csr-approver-29536514-8wd48"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.426914 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7ccl\" (UniqueName: \"kubernetes.io/projected/9930cc64-459c-430f-b78b-d01806c13920-kube-api-access-q7ccl\") pod \"auto-csr-approver-29536514-8wd48\" (UID: \"9930cc64-459c-430f-b78b-d01806c13920\") " pod="openshift-infra/auto-csr-approver-29536514-8wd48"
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.506072 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536514-8wd48"
Feb 27 11:14:00 crc kubenswrapper[4998]: W0227 11:14:00.992532 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9930cc64_459c_430f_b78b_d01806c13920.slice/crio-d4c42205f25d7875efd8783fb04a4a83fe730d2cae8800b215bb4326d70a8b7a WatchSource:0}: Error finding container d4c42205f25d7875efd8783fb04a4a83fe730d2cae8800b215bb4326d70a8b7a: Status 404 returned error can't find the container with id d4c42205f25d7875efd8783fb04a4a83fe730d2cae8800b215bb4326d70a8b7a
Feb 27 11:14:00 crc kubenswrapper[4998]: I0227 11:14:00.998957 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536514-8wd48"]
Feb 27 11:14:01 crc kubenswrapper[4998]: I0227 11:14:01.762054 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536514-8wd48" event={"ID":"9930cc64-459c-430f-b78b-d01806c13920","Type":"ContainerStarted","Data":"d4c42205f25d7875efd8783fb04a4a83fe730d2cae8800b215bb4326d70a8b7a"}
Feb 27 11:14:02 crc kubenswrapper[4998]: I0227 11:14:02.777872 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536514-8wd48" event={"ID":"9930cc64-459c-430f-b78b-d01806c13920","Type":"ContainerStarted","Data":"8d9bc37363660ff7526f94af6eb2fd204941260366a9d81e72ac99b14dd2046b"}
Feb 27 11:14:02 crc kubenswrapper[4998]: I0227 11:14:02.803524 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536514-8wd48" podStartSLOduration=1.633789728 podStartE2EDuration="2.803501497s" podCreationTimestamp="2026-02-27 11:14:00 +0000 UTC" firstStartedPulling="2026-02-27 11:14:00.996729385 +0000 UTC m=+3392.995000353" lastFinishedPulling="2026-02-27 11:14:02.166441124 +0000 UTC m=+3394.164712122" observedRunningTime="2026-02-27 11:14:02.788888702 +0000 UTC m=+3394.787159670" watchObservedRunningTime="2026-02-27 11:14:02.803501497 +0000 UTC m=+3394.801772485"
Feb 27 11:14:03 crc kubenswrapper[4998]: I0227 11:14:03.789060 4998 generic.go:334] "Generic (PLEG): container finished" podID="9930cc64-459c-430f-b78b-d01806c13920" containerID="8d9bc37363660ff7526f94af6eb2fd204941260366a9d81e72ac99b14dd2046b" exitCode=0
Feb 27 11:14:03 crc kubenswrapper[4998]: I0227 11:14:03.789431 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536514-8wd48" event={"ID":"9930cc64-459c-430f-b78b-d01806c13920","Type":"ContainerDied","Data":"8d9bc37363660ff7526f94af6eb2fd204941260366a9d81e72ac99b14dd2046b"}
Feb 27 11:14:05 crc kubenswrapper[4998]: I0227 11:14:05.200934 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536514-8wd48"
Feb 27 11:14:05 crc kubenswrapper[4998]: I0227 11:14:05.312134 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7ccl\" (UniqueName: \"kubernetes.io/projected/9930cc64-459c-430f-b78b-d01806c13920-kube-api-access-q7ccl\") pod \"9930cc64-459c-430f-b78b-d01806c13920\" (UID: \"9930cc64-459c-430f-b78b-d01806c13920\") "
Feb 27 11:14:05 crc kubenswrapper[4998]: I0227 11:14:05.319523 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9930cc64-459c-430f-b78b-d01806c13920-kube-api-access-q7ccl" (OuterVolumeSpecName: "kube-api-access-q7ccl") pod "9930cc64-459c-430f-b78b-d01806c13920" (UID: "9930cc64-459c-430f-b78b-d01806c13920"). InnerVolumeSpecName "kube-api-access-q7ccl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:14:05 crc kubenswrapper[4998]: I0227 11:14:05.415006 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7ccl\" (UniqueName: \"kubernetes.io/projected/9930cc64-459c-430f-b78b-d01806c13920-kube-api-access-q7ccl\") on node \"crc\" DevicePath \"\""
Feb 27 11:14:05 crc kubenswrapper[4998]: I0227 11:14:05.821028 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536514-8wd48" event={"ID":"9930cc64-459c-430f-b78b-d01806c13920","Type":"ContainerDied","Data":"d4c42205f25d7875efd8783fb04a4a83fe730d2cae8800b215bb4326d70a8b7a"}
Feb 27 11:14:05 crc kubenswrapper[4998]: I0227 11:14:05.821089 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c42205f25d7875efd8783fb04a4a83fe730d2cae8800b215bb4326d70a8b7a"
Feb 27 11:14:05 crc kubenswrapper[4998]: I0227 11:14:05.823074 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536514-8wd48"
Feb 27 11:14:05 crc kubenswrapper[4998]: I0227 11:14:05.907902 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536508-qd4nl"]
Feb 27 11:14:05 crc kubenswrapper[4998]: I0227 11:14:05.925892 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536508-qd4nl"]
Feb 27 11:14:06 crc kubenswrapper[4998]: I0227 11:14:06.779314 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a0757f-dda2-4efc-accf-b9efd88032bb" path="/var/lib/kubelet/pods/d2a0757f-dda2-4efc-accf-b9efd88032bb/volumes"
Feb 27 11:14:40 crc kubenswrapper[4998]: I0227 11:14:40.505199 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 11:14:40 crc kubenswrapper[4998]: I0227 11:14:40.505877 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 11:14:54 crc kubenswrapper[4998]: I0227 11:14:54.384824 4998 scope.go:117] "RemoveContainer" containerID="9487ef2cde9eeb9d99a6308100c7da9ef7d1b52b900b90c41f6b6359db543d79"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.159167 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"]
Feb 27 11:15:00 crc kubenswrapper[4998]: E0227 11:15:00.160299 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9930cc64-459c-430f-b78b-d01806c13920" containerName="oc"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.160320 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="9930cc64-459c-430f-b78b-d01806c13920" containerName="oc"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.160603 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="9930cc64-459c-430f-b78b-d01806c13920" containerName="oc"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.161417 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.164142 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.164161 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.175027 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"]
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.216619 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/060fd5d7-abca-4877-8fdf-d171ac39f69a-config-volume\") pod \"collect-profiles-29536515-dkcgd\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.216762 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2pjs\" (UniqueName: \"kubernetes.io/projected/060fd5d7-abca-4877-8fdf-d171ac39f69a-kube-api-access-b2pjs\") pod \"collect-profiles-29536515-dkcgd\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.216815 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/060fd5d7-abca-4877-8fdf-d171ac39f69a-secret-volume\") pod \"collect-profiles-29536515-dkcgd\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.318574 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/060fd5d7-abca-4877-8fdf-d171ac39f69a-config-volume\") pod \"collect-profiles-29536515-dkcgd\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.318624 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2pjs\" (UniqueName: \"kubernetes.io/projected/060fd5d7-abca-4877-8fdf-d171ac39f69a-kube-api-access-b2pjs\") pod \"collect-profiles-29536515-dkcgd\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.318663 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/060fd5d7-abca-4877-8fdf-d171ac39f69a-secret-volume\") pod \"collect-profiles-29536515-dkcgd\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.319665 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/060fd5d7-abca-4877-8fdf-d171ac39f69a-config-volume\") pod \"collect-profiles-29536515-dkcgd\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.326829 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/060fd5d7-abca-4877-8fdf-d171ac39f69a-secret-volume\") pod \"collect-profiles-29536515-dkcgd\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.336189 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2pjs\" (UniqueName: \"kubernetes.io/projected/060fd5d7-abca-4877-8fdf-d171ac39f69a-kube-api-access-b2pjs\") pod \"collect-profiles-29536515-dkcgd\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.511693 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:00 crc kubenswrapper[4998]: I0227 11:15:00.986703 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"]
Feb 27 11:15:01 crc kubenswrapper[4998]: I0227 11:15:01.402095 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd" event={"ID":"060fd5d7-abca-4877-8fdf-d171ac39f69a","Type":"ContainerStarted","Data":"79f264509fb657409a95b1ba6fdaee9e35719db53bb488764a5c5d6c6d7228f6"}
Feb 27 11:15:01 crc kubenswrapper[4998]: I0227 11:15:01.402566 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd" event={"ID":"060fd5d7-abca-4877-8fdf-d171ac39f69a","Type":"ContainerStarted","Data":"fadbf06f3043581b6f35334b1a2011b70c51de93276f27ac4b74971a25bd0850"}
Feb 27 11:15:01 crc kubenswrapper[4998]: I0227 11:15:01.422998 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd" podStartSLOduration=1.422972643 podStartE2EDuration="1.422972643s" podCreationTimestamp="2026-02-27 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:15:01.415710176 +0000 UTC m=+3453.413981144" watchObservedRunningTime="2026-02-27 11:15:01.422972643 +0000 UTC m=+3453.421243611"
Feb 27 11:15:02 crc kubenswrapper[4998]: I0227 11:15:02.412211 4998 generic.go:334] "Generic (PLEG): container finished" podID="060fd5d7-abca-4877-8fdf-d171ac39f69a" containerID="79f264509fb657409a95b1ba6fdaee9e35719db53bb488764a5c5d6c6d7228f6" exitCode=0
Feb 27 11:15:02 crc kubenswrapper[4998]: I0227 11:15:02.412264 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd" event={"ID":"060fd5d7-abca-4877-8fdf-d171ac39f69a","Type":"ContainerDied","Data":"79f264509fb657409a95b1ba6fdaee9e35719db53bb488764a5c5d6c6d7228f6"}
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.809624 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.893924 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2pjs\" (UniqueName: \"kubernetes.io/projected/060fd5d7-abca-4877-8fdf-d171ac39f69a-kube-api-access-b2pjs\") pod \"060fd5d7-abca-4877-8fdf-d171ac39f69a\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") "
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.894340 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/060fd5d7-abca-4877-8fdf-d171ac39f69a-config-volume\") pod \"060fd5d7-abca-4877-8fdf-d171ac39f69a\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") "
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.894551 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/060fd5d7-abca-4877-8fdf-d171ac39f69a-secret-volume\") pod \"060fd5d7-abca-4877-8fdf-d171ac39f69a\" (UID: \"060fd5d7-abca-4877-8fdf-d171ac39f69a\") "
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.895133 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/060fd5d7-abca-4877-8fdf-d171ac39f69a-config-volume" (OuterVolumeSpecName: "config-volume") pod "060fd5d7-abca-4877-8fdf-d171ac39f69a" (UID: "060fd5d7-abca-4877-8fdf-d171ac39f69a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.901453 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060fd5d7-abca-4877-8fdf-d171ac39f69a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "060fd5d7-abca-4877-8fdf-d171ac39f69a" (UID: "060fd5d7-abca-4877-8fdf-d171ac39f69a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.903484 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/060fd5d7-abca-4877-8fdf-d171ac39f69a-kube-api-access-b2pjs" (OuterVolumeSpecName: "kube-api-access-b2pjs") pod "060fd5d7-abca-4877-8fdf-d171ac39f69a" (UID: "060fd5d7-abca-4877-8fdf-d171ac39f69a"). InnerVolumeSpecName "kube-api-access-b2pjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.996734 4998 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/060fd5d7-abca-4877-8fdf-d171ac39f69a-config-volume\") on node \"crc\" DevicePath \"\""
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.996778 4998 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/060fd5d7-abca-4877-8fdf-d171ac39f69a-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 27 11:15:03 crc kubenswrapper[4998]: I0227 11:15:03.996793 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2pjs\" (UniqueName: \"kubernetes.io/projected/060fd5d7-abca-4877-8fdf-d171ac39f69a-kube-api-access-b2pjs\") on node \"crc\" DevicePath \"\""
Feb 27 11:15:04 crc kubenswrapper[4998]: I0227 11:15:04.434412 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd" event={"ID":"060fd5d7-abca-4877-8fdf-d171ac39f69a","Type":"ContainerDied","Data":"fadbf06f3043581b6f35334b1a2011b70c51de93276f27ac4b74971a25bd0850"}
Feb 27 11:15:04 crc kubenswrapper[4998]: I0227 11:15:04.434460 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fadbf06f3043581b6f35334b1a2011b70c51de93276f27ac4b74971a25bd0850"
Feb 27 11:15:04 crc kubenswrapper[4998]: I0227 11:15:04.434461 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-dkcgd"
Feb 27 11:15:04 crc kubenswrapper[4998]: I0227 11:15:04.499930 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k"]
Feb 27 11:15:04 crc kubenswrapper[4998]: I0227 11:15:04.527463 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-jpm4k"]
Feb 27 11:15:04 crc kubenswrapper[4998]: I0227 11:15:04.781588 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd08a0b-2493-44df-993a-6d9acfe1bf6e" path="/var/lib/kubelet/pods/7cd08a0b-2493-44df-993a-6d9acfe1bf6e/volumes"
Feb 27 11:15:10 crc kubenswrapper[4998]: I0227 11:15:10.504200 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 11:15:10 crc kubenswrapper[4998]: I0227 11:15:10.504702 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.490170 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbrb"]
Feb 27 11:15:29 crc kubenswrapper[4998]: E0227 11:15:29.491100 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060fd5d7-abca-4877-8fdf-d171ac39f69a" containerName="collect-profiles"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.491116 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="060fd5d7-abca-4877-8fdf-d171ac39f69a" containerName="collect-profiles"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.491381 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="060fd5d7-abca-4877-8fdf-d171ac39f69a" containerName="collect-profiles"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.492959 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.531437 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbrb"]
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.593378 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-utilities\") pod \"redhat-marketplace-nwbrb\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.593572 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-catalog-content\") pod \"redhat-marketplace-nwbrb\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.593659 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69rr\" (UniqueName: \"kubernetes.io/projected/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-kube-api-access-k69rr\") pod \"redhat-marketplace-nwbrb\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.695771 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-catalog-content\") pod \"redhat-marketplace-nwbrb\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.695826 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69rr\" (UniqueName: \"kubernetes.io/projected/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-kube-api-access-k69rr\") pod \"redhat-marketplace-nwbrb\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.695914 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-utilities\") pod \"redhat-marketplace-nwbrb\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.696983 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-catalog-content\") pod \"redhat-marketplace-nwbrb\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.696999 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-utilities\") pod \"redhat-marketplace-nwbrb\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.722880 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69rr\" (UniqueName: \"kubernetes.io/projected/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-kube-api-access-k69rr\") pod \"redhat-marketplace-nwbrb\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:29 crc kubenswrapper[4998]: I0227 11:15:29.812182 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:30 crc kubenswrapper[4998]: I0227 11:15:30.295164 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbrb"]
Feb 27 11:15:30 crc kubenswrapper[4998]: I0227 11:15:30.669688 4998 generic.go:334] "Generic (PLEG): container finished" podID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerID="f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd" exitCode=0
Feb 27 11:15:30 crc kubenswrapper[4998]: I0227 11:15:30.669787 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbrb" event={"ID":"7d5da507-f32c-410c-b2db-29a0f6a4f6b0","Type":"ContainerDied","Data":"f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd"}
Feb 27 11:15:30 crc kubenswrapper[4998]: I0227 11:15:30.670057 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbrb" event={"ID":"7d5da507-f32c-410c-b2db-29a0f6a4f6b0","Type":"ContainerStarted","Data":"8b9aa62fb29f70936ce6099babbeaa2a2e07fb0068f4afa08e32cca55ff3f3a1"}
Feb 27 11:15:32 crc kubenswrapper[4998]: I0227 11:15:32.692263 4998 generic.go:334] "Generic (PLEG): container finished" podID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerID="6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c" exitCode=0
Feb 27 11:15:32 crc kubenswrapper[4998]: I0227 11:15:32.692333 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbrb" event={"ID":"7d5da507-f32c-410c-b2db-29a0f6a4f6b0","Type":"ContainerDied","Data":"6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c"}
Feb 27 11:15:33 crc kubenswrapper[4998]: I0227 11:15:33.707671 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbrb" event={"ID":"7d5da507-f32c-410c-b2db-29a0f6a4f6b0","Type":"ContainerStarted","Data":"712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d"}
Feb 27 11:15:33 crc kubenswrapper[4998]: I0227 11:15:33.727518 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nwbrb" podStartSLOduration=2.2803643989999998 podStartE2EDuration="4.727491353s" podCreationTimestamp="2026-02-27 11:15:29 +0000 UTC" firstStartedPulling="2026-02-27 11:15:30.672947019 +0000 UTC m=+3482.671217987" lastFinishedPulling="2026-02-27 11:15:33.120073963 +0000 UTC m=+3485.118344941" observedRunningTime="2026-02-27 11:15:33.724339598 +0000 UTC m=+3485.722610596" watchObservedRunningTime="2026-02-27 11:15:33.727491353 +0000 UTC m=+3485.725762351"
Feb 27 11:15:39 crc kubenswrapper[4998]: I0227 11:15:39.813394 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:39 crc kubenswrapper[4998]: I0227 11:15:39.814419 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:39 crc kubenswrapper[4998]: I0227 11:15:39.875315 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.504369 4998 patch_prober.go:28] interesting pod/machine-config-daemon-m6kr5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.504460 4998 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.504523 4998 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5"
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.505604 4998 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d896093a44467e7ee53003858171c833352be9528110c9ca9c7f0e969ee905c"} pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.505699 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" podUID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerName="machine-config-daemon" containerID="cri-o://8d896093a44467e7ee53003858171c833352be9528110c9ca9c7f0e969ee905c" gracePeriod=600
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.831208 4998 generic.go:334] "Generic (PLEG): container finished" podID="400c5e2f-5448-49c6-bf8e-04b21e552bb2" containerID="8d896093a44467e7ee53003858171c833352be9528110c9ca9c7f0e969ee905c" exitCode=0
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.831261 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerDied","Data":"8d896093a44467e7ee53003858171c833352be9528110c9ca9c7f0e969ee905c"}
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.831620 4998 scope.go:117] "RemoveContainer" containerID="80e499ffd9f6c39119f9b9bd2b1b9c0b38519d681fc2c93cfe8afbe50a1baa31"
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.890952 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nwbrb"
Feb 27 11:15:40 crc kubenswrapper[4998]: I0227 11:15:40.944258 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbrb"]
Feb 27 11:15:41 crc kubenswrapper[4998]: I0227 11:15:41.843275 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m6kr5" event={"ID":"400c5e2f-5448-49c6-bf8e-04b21e552bb2","Type":"ContainerStarted","Data":"38634b633b6df757c2c11b29cd2e2238841e10057cf8e6de0368494540bb57b0"}
Feb 27 11:15:42 crc kubenswrapper[4998]: I0227 11:15:42.856373 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nwbrb" podUID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerName="registry-server" containerID="cri-o://712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d" gracePeriod=2
Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.347064 4998 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwbrb" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.493264 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-catalog-content\") pod \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.493697 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-utilities\") pod \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.493844 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k69rr\" (UniqueName: \"kubernetes.io/projected/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-kube-api-access-k69rr\") pod \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\" (UID: \"7d5da507-f32c-410c-b2db-29a0f6a4f6b0\") " Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.494688 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-utilities" (OuterVolumeSpecName: "utilities") pod "7d5da507-f32c-410c-b2db-29a0f6a4f6b0" (UID: "7d5da507-f32c-410c-b2db-29a0f6a4f6b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.502266 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-kube-api-access-k69rr" (OuterVolumeSpecName: "kube-api-access-k69rr") pod "7d5da507-f32c-410c-b2db-29a0f6a4f6b0" (UID: "7d5da507-f32c-410c-b2db-29a0f6a4f6b0"). InnerVolumeSpecName "kube-api-access-k69rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.529702 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d5da507-f32c-410c-b2db-29a0f6a4f6b0" (UID: "7d5da507-f32c-410c-b2db-29a0f6a4f6b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.596700 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.596776 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k69rr\" (UniqueName: \"kubernetes.io/projected/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-kube-api-access-k69rr\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.596795 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5da507-f32c-410c-b2db-29a0f6a4f6b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.872065 4998 generic.go:334] "Generic (PLEG): container finished" podID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerID="712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d" exitCode=0 Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.872125 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbrb" event={"ID":"7d5da507-f32c-410c-b2db-29a0f6a4f6b0","Type":"ContainerDied","Data":"712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d"} Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.872173 4998 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-nwbrb" event={"ID":"7d5da507-f32c-410c-b2db-29a0f6a4f6b0","Type":"ContainerDied","Data":"8b9aa62fb29f70936ce6099babbeaa2a2e07fb0068f4afa08e32cca55ff3f3a1"} Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.872174 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwbrb" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.872205 4998 scope.go:117] "RemoveContainer" containerID="712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.917489 4998 scope.go:117] "RemoveContainer" containerID="6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c" Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.931214 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbrb"] Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.943292 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbrb"] Feb 27 11:15:43 crc kubenswrapper[4998]: I0227 11:15:43.962954 4998 scope.go:117] "RemoveContainer" containerID="f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd" Feb 27 11:15:44 crc kubenswrapper[4998]: I0227 11:15:44.023046 4998 scope.go:117] "RemoveContainer" containerID="712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d" Feb 27 11:15:44 crc kubenswrapper[4998]: E0227 11:15:44.023661 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d\": container with ID starting with 712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d not found: ID does not exist" containerID="712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d" Feb 27 11:15:44 crc kubenswrapper[4998]: I0227 11:15:44.023915 4998 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d"} err="failed to get container status \"712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d\": rpc error: code = NotFound desc = could not find container \"712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d\": container with ID starting with 712e79ed93b69f15bd5774ec62fc8835c6d68e6fcca39fd2fda1d1a511ded30d not found: ID does not exist" Feb 27 11:15:44 crc kubenswrapper[4998]: I0227 11:15:44.023959 4998 scope.go:117] "RemoveContainer" containerID="6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c" Feb 27 11:15:44 crc kubenswrapper[4998]: E0227 11:15:44.024375 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c\": container with ID starting with 6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c not found: ID does not exist" containerID="6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c" Feb 27 11:15:44 crc kubenswrapper[4998]: I0227 11:15:44.024406 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c"} err="failed to get container status \"6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c\": rpc error: code = NotFound desc = could not find container \"6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c\": container with ID starting with 6fb45be8a1ef4631c0270375b43df275cf852280f42641a705b312b4d1b3894c not found: ID does not exist" Feb 27 11:15:44 crc kubenswrapper[4998]: I0227 11:15:44.024430 4998 scope.go:117] "RemoveContainer" containerID="f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd" Feb 27 11:15:44 crc kubenswrapper[4998]: E0227 
11:15:44.024712 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd\": container with ID starting with f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd not found: ID does not exist" containerID="f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd" Feb 27 11:15:44 crc kubenswrapper[4998]: I0227 11:15:44.024752 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd"} err="failed to get container status \"f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd\": rpc error: code = NotFound desc = could not find container \"f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd\": container with ID starting with f777b405acd3709bd1ec541875505c2338e583e7b840db03cc662ec8d01316fd not found: ID does not exist" Feb 27 11:15:44 crc kubenswrapper[4998]: I0227 11:15:44.785307 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" path="/var/lib/kubelet/pods/7d5da507-f32c-410c-b2db-29a0f6a4f6b0/volumes" Feb 27 11:15:54 crc kubenswrapper[4998]: I0227 11:15:54.482481 4998 scope.go:117] "RemoveContainer" containerID="672de149f4e450ea35aadc20bda64b8dd7d7a18530a5daa545bbcaad8f84a0d2" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.158749 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536516-smjjs"] Feb 27 11:16:00 crc kubenswrapper[4998]: E0227 11:16:00.160029 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerName="extract-utilities" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.160049 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" 
containerName="extract-utilities" Feb 27 11:16:00 crc kubenswrapper[4998]: E0227 11:16:00.160068 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerName="registry-server" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.160077 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerName="registry-server" Feb 27 11:16:00 crc kubenswrapper[4998]: E0227 11:16:00.160103 4998 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerName="extract-content" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.160113 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerName="extract-content" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.160418 4998 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5da507-f32c-410c-b2db-29a0f6a4f6b0" containerName="registry-server" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.161317 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536516-smjjs" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.164152 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.168255 4998 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.168737 4998 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-b74ch" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.177377 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536516-smjjs"] Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.252446 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv64p\" (UniqueName: \"kubernetes.io/projected/60eee63f-fa4c-4e9f-8e47-b6876bb4123e-kube-api-access-fv64p\") pod \"auto-csr-approver-29536516-smjjs\" (UID: \"60eee63f-fa4c-4e9f-8e47-b6876bb4123e\") " pod="openshift-infra/auto-csr-approver-29536516-smjjs" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.354347 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv64p\" (UniqueName: \"kubernetes.io/projected/60eee63f-fa4c-4e9f-8e47-b6876bb4123e-kube-api-access-fv64p\") pod \"auto-csr-approver-29536516-smjjs\" (UID: \"60eee63f-fa4c-4e9f-8e47-b6876bb4123e\") " pod="openshift-infra/auto-csr-approver-29536516-smjjs" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.380273 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv64p\" (UniqueName: \"kubernetes.io/projected/60eee63f-fa4c-4e9f-8e47-b6876bb4123e-kube-api-access-fv64p\") pod \"auto-csr-approver-29536516-smjjs\" (UID: \"60eee63f-fa4c-4e9f-8e47-b6876bb4123e\") " 
pod="openshift-infra/auto-csr-approver-29536516-smjjs" Feb 27 11:16:00 crc kubenswrapper[4998]: I0227 11:16:00.484598 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536516-smjjs" Feb 27 11:16:01 crc kubenswrapper[4998]: W0227 11:16:01.020853 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60eee63f_fa4c_4e9f_8e47_b6876bb4123e.slice/crio-6f21df708f79a91be13724d69d98dd8aef3e20ec8e9d9e79626d537e0428ac6a WatchSource:0}: Error finding container 6f21df708f79a91be13724d69d98dd8aef3e20ec8e9d9e79626d537e0428ac6a: Status 404 returned error can't find the container with id 6f21df708f79a91be13724d69d98dd8aef3e20ec8e9d9e79626d537e0428ac6a Feb 27 11:16:01 crc kubenswrapper[4998]: I0227 11:16:01.021120 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536516-smjjs"] Feb 27 11:16:01 crc kubenswrapper[4998]: I0227 11:16:01.062289 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536516-smjjs" event={"ID":"60eee63f-fa4c-4e9f-8e47-b6876bb4123e","Type":"ContainerStarted","Data":"6f21df708f79a91be13724d69d98dd8aef3e20ec8e9d9e79626d537e0428ac6a"} Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.084872 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536516-smjjs" event={"ID":"60eee63f-fa4c-4e9f-8e47-b6876bb4123e","Type":"ContainerStarted","Data":"60cfdbbc4e7fa8f83106aa0c3de322c3d68c30e536b8ca4f53c21ecde686810b"} Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.106011 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536516-smjjs" podStartSLOduration=1.420111207 podStartE2EDuration="3.105988027s" podCreationTimestamp="2026-02-27 11:16:00 +0000 UTC" firstStartedPulling="2026-02-27 11:16:01.024317475 +0000 UTC 
m=+3513.022588453" lastFinishedPulling="2026-02-27 11:16:02.710194305 +0000 UTC m=+3514.708465273" observedRunningTime="2026-02-27 11:16:03.099718618 +0000 UTC m=+3515.097989596" watchObservedRunningTime="2026-02-27 11:16:03.105988027 +0000 UTC m=+3515.104258995" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.391955 4998 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tmwxc"] Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.395340 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.431559 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmwxc"] Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.547977 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4-catalog-content\") pod \"community-operators-tmwxc\" (UID: \"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4\") " pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.548267 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4-utilities\") pod \"community-operators-tmwxc\" (UID: \"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4\") " pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.548305 4998 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgj27\" (UniqueName: \"kubernetes.io/projected/ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4-kube-api-access-sgj27\") pod \"community-operators-tmwxc\" (UID: \"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4\") " 
pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.650395 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4-catalog-content\") pod \"community-operators-tmwxc\" (UID: \"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4\") " pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.650451 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4-utilities\") pod \"community-operators-tmwxc\" (UID: \"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4\") " pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.650482 4998 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgj27\" (UniqueName: \"kubernetes.io/projected/ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4-kube-api-access-sgj27\") pod \"community-operators-tmwxc\" (UID: \"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4\") " pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.651213 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4-utilities\") pod \"community-operators-tmwxc\" (UID: \"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4\") " pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.651492 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4-catalog-content\") pod \"community-operators-tmwxc\" (UID: \"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4\") " 
pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.679654 4998 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgj27\" (UniqueName: \"kubernetes.io/projected/ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4-kube-api-access-sgj27\") pod \"community-operators-tmwxc\" (UID: \"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4\") " pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:03 crc kubenswrapper[4998]: I0227 11:16:03.744345 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmwxc" Feb 27 11:16:04 crc kubenswrapper[4998]: I0227 11:16:04.093436 4998 generic.go:334] "Generic (PLEG): container finished" podID="60eee63f-fa4c-4e9f-8e47-b6876bb4123e" containerID="60cfdbbc4e7fa8f83106aa0c3de322c3d68c30e536b8ca4f53c21ecde686810b" exitCode=0 Feb 27 11:16:04 crc kubenswrapper[4998]: I0227 11:16:04.093507 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536516-smjjs" event={"ID":"60eee63f-fa4c-4e9f-8e47-b6876bb4123e","Type":"ContainerDied","Data":"60cfdbbc4e7fa8f83106aa0c3de322c3d68c30e536b8ca4f53c21ecde686810b"} Feb 27 11:16:04 crc kubenswrapper[4998]: I0227 11:16:04.299296 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmwxc"] Feb 27 11:16:04 crc kubenswrapper[4998]: W0227 11:16:04.308679 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podead5a5f3_3d7d_49ca_aeca_c4f6d1cda9d4.slice/crio-005c0227b56521b60510aef0b3a42e0852f83a4e947b595bb22dc8df3ad3728a WatchSource:0}: Error finding container 005c0227b56521b60510aef0b3a42e0852f83a4e947b595bb22dc8df3ad3728a: Status 404 returned error can't find the container with id 005c0227b56521b60510aef0b3a42e0852f83a4e947b595bb22dc8df3ad3728a Feb 27 11:16:05 crc kubenswrapper[4998]: I0227 
11:16:05.111071 4998 generic.go:334] "Generic (PLEG): container finished" podID="ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4" containerID="0e2a1780a0427864ff0a3ad5daa7b575c8c817587e181af867148c93b7f86ef3" exitCode=0 Feb 27 11:16:05 crc kubenswrapper[4998]: I0227 11:16:05.111333 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmwxc" event={"ID":"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4","Type":"ContainerDied","Data":"0e2a1780a0427864ff0a3ad5daa7b575c8c817587e181af867148c93b7f86ef3"} Feb 27 11:16:05 crc kubenswrapper[4998]: I0227 11:16:05.111516 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmwxc" event={"ID":"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4","Type":"ContainerStarted","Data":"005c0227b56521b60510aef0b3a42e0852f83a4e947b595bb22dc8df3ad3728a"} Feb 27 11:16:05 crc kubenswrapper[4998]: I0227 11:16:05.514970 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536516-smjjs" Feb 27 11:16:05 crc kubenswrapper[4998]: I0227 11:16:05.588538 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv64p\" (UniqueName: \"kubernetes.io/projected/60eee63f-fa4c-4e9f-8e47-b6876bb4123e-kube-api-access-fv64p\") pod \"60eee63f-fa4c-4e9f-8e47-b6876bb4123e\" (UID: \"60eee63f-fa4c-4e9f-8e47-b6876bb4123e\") " Feb 27 11:16:05 crc kubenswrapper[4998]: I0227 11:16:05.594398 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60eee63f-fa4c-4e9f-8e47-b6876bb4123e-kube-api-access-fv64p" (OuterVolumeSpecName: "kube-api-access-fv64p") pod "60eee63f-fa4c-4e9f-8e47-b6876bb4123e" (UID: "60eee63f-fa4c-4e9f-8e47-b6876bb4123e"). InnerVolumeSpecName "kube-api-access-fv64p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:16:05 crc kubenswrapper[4998]: I0227 11:16:05.691980 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv64p\" (UniqueName: \"kubernetes.io/projected/60eee63f-fa4c-4e9f-8e47-b6876bb4123e-kube-api-access-fv64p\") on node \"crc\" DevicePath \"\"" Feb 27 11:16:06 crc kubenswrapper[4998]: I0227 11:16:06.134645 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536516-smjjs" event={"ID":"60eee63f-fa4c-4e9f-8e47-b6876bb4123e","Type":"ContainerDied","Data":"6f21df708f79a91be13724d69d98dd8aef3e20ec8e9d9e79626d537e0428ac6a"} Feb 27 11:16:06 crc kubenswrapper[4998]: I0227 11:16:06.134678 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536516-smjjs" Feb 27 11:16:06 crc kubenswrapper[4998]: I0227 11:16:06.134692 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f21df708f79a91be13724d69d98dd8aef3e20ec8e9d9e79626d537e0428ac6a" Feb 27 11:16:06 crc kubenswrapper[4998]: I0227 11:16:06.166322 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536510-vndwl"] Feb 27 11:16:06 crc kubenswrapper[4998]: I0227 11:16:06.182913 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536510-vndwl"] Feb 27 11:16:06 crc kubenswrapper[4998]: I0227 11:16:06.776741 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695eb442-fe1c-4e2c-b1a4-161fa4bbc0db" path="/var/lib/kubelet/pods/695eb442-fe1c-4e2c-b1a4-161fa4bbc0db/volumes" Feb 27 11:16:10 crc kubenswrapper[4998]: I0227 11:16:10.172298 4998 generic.go:334] "Generic (PLEG): container finished" podID="ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4" containerID="0af7566e761ee02d87bc4c15fdd49c90b8d2d515ac569c2c1f86e78bcd3156c2" exitCode=0 Feb 27 11:16:10 crc kubenswrapper[4998]: I0227 11:16:10.172342 4998 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmwxc" event={"ID":"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4","Type":"ContainerDied","Data":"0af7566e761ee02d87bc4c15fdd49c90b8d2d515ac569c2c1f86e78bcd3156c2"}
Feb 27 11:16:11 crc kubenswrapper[4998]: I0227 11:16:11.185793 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmwxc" event={"ID":"ead5a5f3-3d7d-49ca-aeca-c4f6d1cda9d4","Type":"ContainerStarted","Data":"9b4346c2ac248f8e4450e9702c850a41cf57ca23790f8461e9cba5b955300c02"}
Feb 27 11:16:11 crc kubenswrapper[4998]: I0227 11:16:11.217066 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tmwxc" podStartSLOduration=2.7378976010000002 podStartE2EDuration="8.217044239s" podCreationTimestamp="2026-02-27 11:16:03 +0000 UTC" firstStartedPulling="2026-02-27 11:16:05.117147491 +0000 UTC m=+3517.115418509" lastFinishedPulling="2026-02-27 11:16:10.596294139 +0000 UTC m=+3522.594565147" observedRunningTime="2026-02-27 11:16:11.206262567 +0000 UTC m=+3523.204533555" watchObservedRunningTime="2026-02-27 11:16:11.217044239 +0000 UTC m=+3523.215315217"
Feb 27 11:16:13 crc kubenswrapper[4998]: I0227 11:16:13.744691 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tmwxc"
Feb 27 11:16:13 crc kubenswrapper[4998]: I0227 11:16:13.745095 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tmwxc"
Feb 27 11:16:13 crc kubenswrapper[4998]: I0227 11:16:13.839713 4998 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tmwxc"
Feb 27 11:16:23 crc kubenswrapper[4998]: I0227 11:16:23.827855 4998 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tmwxc"
Feb 27 11:16:23 crc kubenswrapper[4998]: I0227 11:16:23.931139 4998 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmwxc"]
Feb 27 11:16:23 crc kubenswrapper[4998]: I0227 11:16:23.967308 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sf2xm"]
Feb 27 11:16:23 crc kubenswrapper[4998]: I0227 11:16:23.967596 4998 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sf2xm" podUID="f9f52852-ea09-4a76-a196-c48346479c71" containerName="registry-server" containerID="cri-o://deaeb8376d7e055b04c70dc67e63719fe401a59b80e0a32f2d4c9fae5d0ee876" gracePeriod=2
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.344649 4998 generic.go:334] "Generic (PLEG): container finished" podID="f9f52852-ea09-4a76-a196-c48346479c71" containerID="deaeb8376d7e055b04c70dc67e63719fe401a59b80e0a32f2d4c9fae5d0ee876" exitCode=0
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.345566 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sf2xm" event={"ID":"f9f52852-ea09-4a76-a196-c48346479c71","Type":"ContainerDied","Data":"deaeb8376d7e055b04c70dc67e63719fe401a59b80e0a32f2d4c9fae5d0ee876"}
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.345597 4998 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sf2xm" event={"ID":"f9f52852-ea09-4a76-a196-c48346479c71","Type":"ContainerDied","Data":"fb549892fdf168464c6dc4ea7f98c86421b3fb1ef9d3a988c0ef2ef87fef8e9b"}
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.345609 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb549892fdf168464c6dc4ea7f98c86421b3fb1ef9d3a988c0ef2ef87fef8e9b"
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.388511 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sf2xm"
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.471205 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-utilities\") pod \"f9f52852-ea09-4a76-a196-c48346479c71\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") "
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.471385 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9rqj\" (UniqueName: \"kubernetes.io/projected/f9f52852-ea09-4a76-a196-c48346479c71-kube-api-access-c9rqj\") pod \"f9f52852-ea09-4a76-a196-c48346479c71\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") "
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.471522 4998 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-catalog-content\") pod \"f9f52852-ea09-4a76-a196-c48346479c71\" (UID: \"f9f52852-ea09-4a76-a196-c48346479c71\") "
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.471610 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-utilities" (OuterVolumeSpecName: "utilities") pod "f9f52852-ea09-4a76-a196-c48346479c71" (UID: "f9f52852-ea09-4a76-a196-c48346479c71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.471892 4998 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.480345 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f52852-ea09-4a76-a196-c48346479c71-kube-api-access-c9rqj" (OuterVolumeSpecName: "kube-api-access-c9rqj") pod "f9f52852-ea09-4a76-a196-c48346479c71" (UID: "f9f52852-ea09-4a76-a196-c48346479c71"). InnerVolumeSpecName "kube-api-access-c9rqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.539567 4998 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9f52852-ea09-4a76-a196-c48346479c71" (UID: "f9f52852-ea09-4a76-a196-c48346479c71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.573534 4998 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f52852-ea09-4a76-a196-c48346479c71-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 11:16:24 crc kubenswrapper[4998]: I0227 11:16:24.573568 4998 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9rqj\" (UniqueName: \"kubernetes.io/projected/f9f52852-ea09-4a76-a196-c48346479c71-kube-api-access-c9rqj\") on node \"crc\" DevicePath \"\""
Feb 27 11:16:25 crc kubenswrapper[4998]: I0227 11:16:25.355890 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sf2xm"
Feb 27 11:16:25 crc kubenswrapper[4998]: I0227 11:16:25.399001 4998 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sf2xm"]
Feb 27 11:16:25 crc kubenswrapper[4998]: I0227 11:16:25.410497 4998 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sf2xm"]
Feb 27 11:16:26 crc kubenswrapper[4998]: I0227 11:16:26.777724 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f52852-ea09-4a76-a196-c48346479c71" path="/var/lib/kubelet/pods/f9f52852-ea09-4a76-a196-c48346479c71/volumes"
Feb 27 11:16:54 crc kubenswrapper[4998]: I0227 11:16:54.559454 4998 scope.go:117] "RemoveContainer" containerID="428ee579709d3f5a13740ecd39b70f04a7580af6ca259e352c95400323391c0a"
Feb 27 11:16:54 crc kubenswrapper[4998]: I0227 11:16:54.628010 4998 scope.go:117] "RemoveContainer" containerID="a7a5487650ef31d0878ffa7c8dd1a925cc18a678b61369e150c13c288829e3aa"
Feb 27 11:16:54 crc kubenswrapper[4998]: I0227 11:16:54.714267 4998 scope.go:117] "RemoveContainer" containerID="deaeb8376d7e055b04c70dc67e63719fe401a59b80e0a32f2d4c9fae5d0ee876"
Feb 27 11:16:54 crc kubenswrapper[4998]: I0227 11:16:54.742263 4998 scope.go:117] "RemoveContainer" containerID="ace4f67f68528e2c8b3992d7d4d254e3cbc4ae4643b850755d75b2d21e2c119e"